Hike News

[Cracking the Coding Interview] Tree

List of Depths: Given a binary tree, design an algorithm which creates a linked list of all the nodes
at each depth (e.g., if you have a tree with depth D, you'll have D linked lists).

class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class TreeLevel:
    def getTreeLevel(self, root, dep):
        # Return a linked list of all node values at depth `dep` (1-indexed).
        if dep <= 0 or not root:
            return None
        s = ListNode(-1)
        p = s
        if dep == 1:
            p.next = ListNode(root.val)
        else:
            # Splice together the lists collected from the left and right subtrees.
            p.next = self.getTreeLevel(root.left, dep - 1)
            while p.next:
                p = p.next
            p.next = self.getTreeLevel(root.right, dep - 1)
        return s.next
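The method above collects only the nodes at one given depth. To produce one linked list per depth, as the problem statement asks, a level-by-level traversal can be used. A minimal sketch reusing the ListNode and TreeNode classes above (the function name listOfDepths is my own):

def listOfDepths(root):
    # Breadth-first traversal: each pass over `level` produces one linked list.
    results = []
    level = [root] if root else []
    while level:
        dummy = ListNode(-1)
        p = dummy
        next_level = []
        for node in level:
            p.next = ListNode(node.val)
            p = p.next
            if node.left:
                next_level.append(node.left)
            if node.right:
                next_level.append(node.right)
        results.append(dummy.next)
        level = next_level
    return results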

Check Balanced: Implement a function to check if a binary tree is balanced. For the purposes of this question, a balanced tree is defined to be a tree such that the heights of the two subtrees of any node never differ by more than one.

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Checker:
    def checkBST(self, root):
        # Note: this verifies the binary-search-tree ordering property
        # (the in-order traversal is sorted), not the balance condition.
        self.arr = []
        self.dfs(root)
        return self.arr == sorted(self.arr)

    def dfs(self, root):
        if not root:
            return
        self.dfs(root.left)
        self.arr.append(root.val)
        self.dfs(root.right)
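Since checkBST above checks the BST ordering property rather than balance, here is a minimal sketch of the height-balance check actually stated in the problem (the class and helper names are my own); the helper returns -1 as soon as any subtree is found to be unbalanced:

class BalanceChecker:
    def isBalanced(self, root):
        return self.height(root) != -1

    def height(self, root):
        # Height of the subtree, or -1 if it is already known to be unbalanced.
        if not root:
            return 0
        left = self.height(root.left)
        if left == -1:
            return -1
        right = self.height(root.right)
        if right == -1 or abs(left - right) > 1:
            return -1
        return max(left, right) + 1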

Successor: Write an algorithm to find the "next" node (i.e., in-order successor) of a given node in a binary search tree. You may assume that each node has a link to its parent.

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Successor:
    def findSucc(self, root, p):
        # In-order traversal of a BST visits values in sorted order, so the
        # successor of p is simply the next value in that order.
        self.arr = []
        self.dfs(root)
        idx = self.arr.index(p)
        return self.arr[idx + 1] if idx + 1 < len(self.arr) else None

    def dfs(self, root):
        if not root:
            return
        self.dfs(root.left)
        self.arr.append(root.val)
        self.dfs(root.right)
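The traversal above rebuilds the whole in-order sequence. Since the problem states that each node has a link to its parent, the successor can also be found directly from the node itself. A sketch, assuming a node object with left, right and parent attributes (the parent attribute is not part of the TreeNode class above):

def inorder_successor(node):
    # If there is a right subtree, the successor is its leftmost node.
    if node.right:
        node = node.right
        while node.left:
            node = node.left
        return node
    # Otherwise walk up until we arrive from a left child; that parent is the successor.
    while node.parent and node.parent.right is node:
        node = node.parent
    return node.parent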

Paths with Sum: You are given a binary tree in which each node contains an integer value (which might be positive or negative). Design an algorithm to count the number of paths that sum to a given value. The path does not need to start or end at the root or a leaf, but it must go downwards (traveling only from parent nodes to child nodes).

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

class Solution:
    def FindPath(self, root, expectNumber):
        # Enumerate all root-to-leaf paths, then keep those whose values sum
        # to expectNumber.
        paths = self.dfs(root)
        return [path for path in paths if sum(path) == expectNumber]

    def dfs(self, root):
        if root is None:
            return []
        if not root.left and not root.right:
            return [[root.val]]
        return ([[root.val] + path for path in self.dfs(root.left)]
                + [[root.val] + path for path in self.dfs(root.right)])
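FindPath above only considers root-to-leaf paths. The CtCI problem allows a path to start and end anywhere, as long as it travels downwards; a brute-force sketch that counts such paths (the function names are my own):

def count_paths_with_sum(root, target):
    # For every node, count the downward paths starting at that node, then recurse.
    if root is None:
        return 0
    return (count_from(root, target)
            + count_paths_with_sum(root.left, target)
            + count_paths_with_sum(root.right, target))

def count_from(node, remaining):
    # Count downward paths starting at `node` whose values sum to `remaining`.
    if node is None:
        return 0
    count = 1 if node.val == remaining else 0
    count += count_from(node.left, remaining - node.val)
    count += count_from(node.right, remaining - node.val)
    return count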

[Cracking the Coding Interview] Arrays and Strings

String Compression: Implement a method to perform basic string compression using the counts of repeated characters. For example, the string aabcccccaaa would become a2b1c5a3. If the "compressed" string would not become smaller than the original string, your method should return the original string. You can assume the string has only uppercase and lowercase letters (a-z).

# -*- coding:utf-8 -*-
class Zipper:
    def zipString(self, iniString):
        # Run-length encode the string; return the original if the
        # "compressed" version is not actually shorter.
        if not iniString:
            return iniString
        zipStr = ""
        strCnt = 1
        for i in range(len(iniString) - 1):
            if iniString[i + 1] == iniString[i]:
                strCnt += 1
            else:
                zipStr += iniString[i] + str(strCnt)
                strCnt = 1
        zipStr += iniString[-1] + str(strCnt)
        return zipStr if len(zipStr) < len(iniString) else iniString

URLify: Write a method to replace all spaces in a string with '%20'. You may assume that the string has sufficient space at the end to hold the additional characters, and that you are given the "true" length of the string.

# -*- coding:utf-8 -*-
class Replacement:
    def replaceSpace(self, iniString, length):
        # Only the first `length` characters are the "true" string;
        # the rest of the buffer is reserved for the expansion.
        return iniString[:length].replace(" ", "%20")
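Python strings are immutable, so the one-liner above builds a new string. The "sufficient space at the end" hint suggests the classic in-place variant, which works backwards over a character buffer; a sketch on a list of characters (the function name urlify and the buffer convention are my own assumptions):

def urlify(chars, true_length):
    # `chars` is a list of characters with enough trailing slots to hold the result.
    spaces = sum(1 for c in chars[:true_length] if c == " ")
    idx = true_length + 2 * spaces  # length of the string after replacement
    for i in range(true_length - 1, -1, -1):
        if chars[i] == " ":
            chars[idx - 3:idx] = ["%", "2", "0"]
            idx -= 3
        else:
            idx -= 1
            chars[idx] = chars[i]
    return chars

For example, urlify(list("Mr John Smith    "), 13) yields the characters of "Mr%20John%20Smith" in the first 17 slots.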

Zero Matrix: Write an algorithm such that if an element in an MxN matrix is 0, its entire row and column are set to 0.

# -*- coding:utf-8 -*-
class Clearer:
    def clearZero(self, mat, n):
        # First record every row and column that contains a zero,
        # then clear them in a second pass.
        rows = set()
        cols = set()
        for r, row in enumerate(mat):
            for c, val in enumerate(row):
                if not val:
                    rows.add(r)
                    cols.add(c)
        for r in rows:
            for i in range(len(mat[r])):
                mat[r][i] = 0
        for c in cols:
            for i in range(len(mat)):
                mat[i][c] = 0
        return mat

String Rotation: Assume you have a method isSubstring which checks if one word is a substring of another. Given two strings, s1 and s2, write code to check if s2 is a rotation of s1 using only one call to isSubstring (e.g., "waterbottle" is a rotation of "erbottlewat").

# -*- coding:utf-8 -*-
class ReverseEqual:
    def checkReverseEqual(self, s1, s2):
        # s2 is a rotation of s1 iff they have the same length and s2 is a
        # substring of s1 + s1 (a single isSubstring / `in` check).
        return len(s1) == len(s2) and s2 in s1 + s1

Linear Algebra (I)

The core of linear algebra is the equation Ax=b. In the column and matrix pictures, the right-hand side of the equation is a vector b. Given a matrix A, do the linear combinations of its column vectors fill the xy-plane (or space, in the three-dimensional case)? If the answer is "no", we say that A is a singular matrix. In this singular case its column vectors are linearly dependent.

From vectors to matrices to subspaces

Vectors

A vector is just a combination of scalars (its components).
An example in $R^3$ would be $v = \begin{bmatrix}1\\2\\3\end{bmatrix}$.

Matrices

A matrix is just a combination of (column) vectors.
An example would be $A = \begin{bmatrix}1&0\\2&1\\3&4\end{bmatrix}$, whose two columns are vectors in $R^3$.

Vector space/Subspace

A vector space is a collection of vectors that is closed under linear combinations. Given a matrix A with columns in $R^3$, these columns and all their linear combinations form a subspace of $R^3$. This is the column space C(A). A subspace is a vector space inside another vector space.
The subspaces of $R^2$ are:

  • all of $R^2$;
  • any line through the origin;
  • the zero vector alone.

The subspaces of $R^3$ are:

  • all of $R^3$;
  • any plane through the origin;
  • any line through the origin;
  • the zero vector alone.

Elimination with matrices

Gauss elimination

The question now arises: how do we judge whether a matrix is singular or not, in other words, can the linear combinations of its column vectors (its column space) fill $R^n$?
We repeat the multiplications and subtractions of Gauss' elimination method on the rows of the matrix to get an upper triangular matrix. A small example follows.
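For instance, subtracting 3 times row 1 from row 2, and then 2 times row 2 from row 3 (the numbers here are generic, chosen only for illustration):

$$
A=\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}
\;\rightarrow\;
\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix}
\;\rightarrow\;
U=\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}
$$

The pivots of this $U$ are 1, 2 and 5, so this example has rank 3 and is not singular.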

We define the first nonzero number in each row as a pivot. The number of pivots in the upper triangular matrix is the rank of the matrix. If the rank is smaller than the dimension of the space the columns should fill ($n$ for an $n \times n$ matrix), the matrix is singular, so the equation Ax=b may have no solution. If the number of rows equals the number of columns (a square matrix) and the matrix can be transformed into the identity $I$, we say the matrix is invertible; the inverse matrix is written as $A^{-1}$, in other words, $A^{-1}A=I$.

Once we have used Gauss' elimination method to convert the original matrix to upper triangular form, we go on with Gauss-Jordan elimination, eliminating the entries in the upper-right portion of the matrix as well. This is the method used to compute the inverse matrix.
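A small generic illustration (the numbers are mine): augment $A$ with the identity and reduce until the left block becomes $I$; the right block is then $A^{-1}$.

$$
\left[\begin{array}{cc|cc}1&3&1&0\\2&7&0&1\end{array}\right]
\;\rightarrow\;
\left[\begin{array}{cc|cc}1&3&1&0\\0&1&-2&1\end{array}\right]
\;\rightarrow\;
\left[\begin{array}{cc|cc}1&0&7&-3\\0&1&-2&1\end{array}\right]
$$

so $A^{-1}=\begin{bmatrix}7&-3\\-2&1\end{bmatrix}$, and indeed $A^{-1}A=I$.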

Factorization into $A=LU$

The Four Spaces

Nullspace and Column Space

The nullspace of a matrix $A$ is the collection of all solutions $x$ of $Ax=0$. The nullspace is a subspace.

How to compute the nullspace?

Elimination steps produce an echelon form (upper triangular matrix) $U$. If the rank $r$ of the matrix is smaller than the number of columns $n$ (for a matrix with m rows and n columns), there exist n - r free columns. Setting each free variable to one (and the others to zero) in turn gives the special solutions, and their linear combinations form the whole nullspace. On the other hand, by continuing the elimination we can convert $U$ to a matrix $R$ in reduced row echelon form (rref), with pivots equal to 1 and zeros above and below the pivots.

The application of the nullspace

We can use the nullspace to compute the complete solution of any $Ax=b$ by combining a particular solution (obtained by setting all free variables to zero) with the nullspace.
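A quick way to check these computations is with sympy (my own choice of library; the matrix below is only a generic example):

from sympy import Matrix, linsolve, symbols

# A 3x4 matrix of rank 2: two pivot columns, two free columns,
# so the nullspace N(A) is two-dimensional.
A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])

R, pivot_columns = A.rref()   # reduced row echelon form and pivot column indices
null_basis = A.nullspace()    # the special solutions of Ax = 0

# Complete solution of Ax = b: a particular solution plus any combination of
# the nullspace vectors; linsolve returns it parametrically in the free variables.
b = Matrix([1, 5, 6])
x1, x2, x3, x4 = symbols("x1 x2 x3 x4")
solution = linsolve((A, b), [x1, x2, x3, x4])
print(R, pivot_columns, null_basis, solution, sep="\n")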

Row space and left nullspace

The row space and the left nullspace are the column space and the nullspace of the transpose $A^T$, respectively.

Orthogonality

For an $m \times n$ matrix $A$ we have:

$\operatorname{rank}(A)$ = number of pivot columns of $A$ = $\dim C(A)$ = $n - \dim N(A)$, where $\dim N(A)$ is the number of free variables;

$\operatorname{rank}(A^T)$ = number of pivot columns of $A^T$ = $\dim C(A^T)$ (the dimension of the row space of $A$) = $m - \dim N(A^T)$.

Subspace $S$ being orthogonal to subspace $T$ means: every vector in $S$ is orthogonal to every vector in $T$.
The row space of a matrix is orthogonal to the nullspace, because $Ax=0$ means the dot product of $x$ with each row of $A$ is 0, and then the dot product of $x$ with any combination of the rows of $A$ must also be 0. The column space is orthogonal to the left nullspace of $A$, because the row space of $A^T$ is perpendicular to the nullspace of $A^T$.

Projection

We'd like to write the projection of a vector $b$ onto the line through a vector $a$ in terms of a projection matrix $P$: $p=Pb$, where
$P=\frac{aa^T}{a^Ta}$
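This follows in one line: the projection is $p = a\hat{x}$ with $\hat{x} = \frac{a^Tb}{a^Ta}$, so

$$p = a\,\frac{a^Tb}{a^Ta} = \frac{aa^T}{a^Ta}\,b = Pb.$$

Note that $P$ is symmetric and $P^2 = P$: projecting a second time changes nothing.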

The application of projection

Stack, Queue and linked list

Stack

A stack (sometimes called a "push-down stack") is an ordered collection of items where the addition of new items and the removal of existing items always takes place at the same end. This end is commonly referred to as the "top." The end opposite the top is known as the "base." The base of the stack is significant since items stored in the stack that are closer to the base represent those that have been in the stack the longest. The most recently added item is the one that is in position to be removed first. This ordering principle is sometimes called LIFO, last-in first-out. It provides an ordering based on length of time in the collection. Newer items are near the top, while older items are near the base.
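A minimal sketch of these operations in Python, using a list whose end serves as the top of the stack (the class and method names are illustrative):

class Stack:
    def __init__(self):
        self.items = []          # base is items[0], top is items[-1]

    def push(self, item):
        self.items.append(item)  # add at the top

    def pop(self):
        return self.items.pop()  # remove the most recently added item (LIFO)

    def peek(self):
        return self.items[-1]

    def is_empty(self):
        return not self.items

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop(), s.pop())  # 3 2 : last in, first out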


Neural Networks

Neural networks are widely used in mathematics, engineering, economics and many other fields owing to their excellent performance in function approximation, pattern recognition, associative memory, forecasting and the generation of new meaningful patterns. Applications of neural networks in wastewater treatment systems require a full understanding of all of these. Therefore, for a deeper comprehension of this methodology, this section provides a preliminary understanding of neural networks, starting with their development and structure, and places more emphasis on their classification and technical coupling with other machine learning methods.

Evolution: from artificial neuron to deep learning

The past six decades have witnessed ups and downs in the development of neural networks. The modern era of neural networks began with the pioneering work of McCulloch and Pitts (1943), who proposed a model based on neurophysiology and mathematics that appeared able to compute any computable function. The next major advance came six years later, when an explicit statement of a physiological learning rule for synaptic modification was presented, supporting the development of the computational models that followed. After that, the prelude to the golden age opened: practical neuro-computers were developed one after another, the most classical of which were the perceptron and the adaptive linear element. With research on neural networks booming, it seemed that intelligent learning machines had already been created. However, an analysis of the weaknesses of the perceptron when confronted with nonlinearly separable problems put an end to this overestimation. Enthusiasm faded slowly, accompanied by declining research funds; nevertheless, the prolonged silence laid the theoretical foundation for the renaissance that still continues. Not until the introduction of the back-propagation of error learning procedure did research on neural networks get back on the right track. Recently, although originally viewed with scepticism, neural networks have undergone a renaissance in the form of deep learning (such as Deep Feedforward Networks, Convolutional Neural Networks, Generative Adversarial Networks and AutoEncoders), as a result of the development of novel training rules, an expansion in the number of layers, access to large-scale datasets and better hardware implementations. Although research based on the principles of the neuron doctrine is far from finished, neural networks in computer science, benefiting from their overwhelming self-adaptability, self-organization and self-learning, have stretched into every walk of life.

Hierarchy: from node to network

Biologically, an activated neuron acts like an interchange station, transferring chemical substances to the neurons connected to it. Chemical substances alter the electric potential in neurons, and we say a neuron is activated if its electric potential exceeds a specific threshold. Similarly, the neuron in computer science is simply an information-processing unit derived from the biological neuron (Haykin, 1998). It consists of three basic elements:
i. a set of weighted connections, referred to as $w_{ij}$, linking neurons $i$ and $j$;
ii. an accumulator calculating a weighted sum of the input signals $x_{ij}$;
iii. an activation function, referred to as $f$, which decides whether or not to pass the activation value on to the connected neurons, depending on the difference between the weighted sum and a threshold value.

The most common activation functions are summarized in Table 2.
In mathematical terms, taking the threshold value into account in the form of a bias, the output of neuron $i$ can be described by the following equation:

$$y_i = f\Big(\sum_{j=1}^{m} w_{ij}\, x_{ij} + \mathrm{bias}\Big)$$
Neurons are the elements of a neural network, which combines them in a specific network topology. Generally, data are introduced to the input layer, processed further in the following layers, and an overall response to the initial inputs is formed in the output layer (Haykin, 1998; Kriesel, 2007). Neural networks consisting of three or more fully or partly connected layers (an input and an output layer with one or more hidden layers) of linearly or nonlinearly activating nodes are described as multilayer perceptrons (MLPs), the most basic and most frequently used form of neural networks. The information learned by a neural network is stored in the connection weights and biases, which are its elementary parameters. Naturally, the number of layers and the number of nodes per layer are meta-parameters. The learning process that adjusts the elementary parameters and the validation process that optimizes the meta-parameters will be introduced in Section 3.
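A minimal numerical sketch of the neuron equation above (numpy and a sigmoid activation are my own choices here; the numbers are arbitrary):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs x_ij, weights w_ij and a bias for one neuron i.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.6])
bias = 0.2

# y_i = f(sum_j w_ij * x_ij + bias)
y = sigmoid(np.dot(w, x) + bias)
print(y)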

Classification: supervised or unsupervised

Benefiting from their respective topologies, different networks show distinct advantages in solving different problems. Traditionally, neural networks are classified according to the following categories: (i) the topological structure involved in the information flow of the network, (ii) the degree of learning supervision, (iii) the learning algorithm. In this section, we describe several popular networks briefly, in order of learning supervision, with some typical topological structures introduced in sub-classifications. Meanwhile, considering the complicated issues and special requirements in wastewater treatment, hybrid frameworks, which are not a particular network type, are added to illustrate the strong technical coupling of neural networks with other machine learning methods.

Supervised Learning

Supervised learning means inferring a model from labeled training data (Mohri et al., 2012). Among supervised networks, feed-forward neural networks (FFNNs) and recurrent neural networks (RNNs) represent two disparate topological structures. FFNNs consist of neurons organized in layers with information flowing forward, from the input layer, through the hidden layer(s), to the output layer; each neuron in a layer is fully connected to the neurons of the neighboring layers (Fig. 3). Back-propagation based FFNNs, owing to their efficiency, conciseness and flexibility, are the most commonly used type (Basheer and Hajmeer, 2000). Back-propagation means that the error is transmitted in the direction opposite to the data flow. The back-propagation algorithm was created by Paul Werbos and reorganized by Rumelhart et al. (Rumelhart, 1986a; Werbos, 1974); it is so important that we introduce it in detail in Section 3 when discussing model training. Radial basis function (RBF) networks are special cases of three-layered back-propagation based networks, but are usually listed separately to allow comparison with them. RBF networks employ RBFs (such as the Gaussian kernel, multiquadric and inverse quadratic functions) as activation functions in the single hidden layer to cluster the inputs of the network, and implement a linear combination of the RBFs in the output layer (Park and Sandberg, 1993, 1991). RBF networks train faster than back-propagation based ones but are not as versatile (Basheer and Hajmeer, 2000).

Convolutional neural networks are a specialized kind of deep feed-forward network for processing data with a grid-like topology (Goodfellow et al., 2016). The shared-weights architecture and translation-invariance characteristics achieved by the convolutional and pooling layers make them tremendously successful in practical applications in computer vision. Generative adversarial networks are a combination of twin sub-networks working together: one generates content and the other judges it. The discriminating network receives either training data or content generated by the generative network; its judgment is sent back to the generating network as feedback, which creates a form of competition that makes the discriminator better at distinguishing real data from generated data and the generator less predictable to the discriminator.

Different from FFNNs, RNNs feed output signals back to neurons in the same or previous layers. These information and connection feedbacks allow the current state of an RNN to depend not only on the current inputs but also on the network state at previous time steps, so the dynamic memory of RNNs plays an important role in problems involving changes over time. The most common are Elman networks, which combine the hidden-layer neurons' outputs with the input-layer signals of the next time step as inputs to the hidden layer (Williams and Zipser, 1989). Another typical RNN, the Hopfield neural network, is a two-layered network with an energy function that guarantees convergence to local minima, which makes it efficient in solving optimization problems (Sathasivam, 2008). Long short-term memory (LSTM) networks are a kind of RNN equipped with memory cells, each of which has three gates: input, output and forget. The input gate and the output gate determine the strength of the information flow from the previous layer and to the following layer, respectively, while the forget gate determines how much information in the current cell to forget. With this adjustable memory, LSTMs have been shown to learn complex interrelationships in sequence problems.

Unsupervised Learning

Unsupervised learning means inferring a model that describes the hidden structure of unlabeled training data (Mohri et al., 2012). Networks belonging to unsupervised learning adaptively update the bunch of weights related to the winning output neuron, which is determined by competition among all output-layer neurons (Basheer and Hajmeer, 2000). The most common are self-organizing maps (SOMs), also known as Kohonen networks. In SOMs, the output-layer neurons are not isolated but are interconnected with their neighbors in the form of a two- or three-dimensional matrix (Kalteh et al., 2008; Kohonen, 1982). This makes it convenient to map input data to a low-dimensional space while the internal topological structure of the high-dimensional characteristics is maintained, which gives SOMs significant advantages in clustering and data compression (Kohonen and Honkela, 2007). Adaptive resonance theory (ART) networks, another frequently used type, do not modify the learned information stored in the weight vectors when presented with a new pattern, but instead enlarge their memory capacity as the number of patterns increases (Carpenter and Grossberg, 2003; Grossberg, 2013). The AutoEncoder is a typical unsupervised learning model attached to deep learning. It is trained to copy its input to its output by learning to compress the data from the input layer into a short code and then to uncompress that code into something matching the original data. This forces the AutoEncoder to perform dimensionality reduction and to learn to ignore noise.

Hybrid frameworks

Hybrid frameworks are not a particular kind of neural network, but combinations of traditional neural networks with other machine learning methods (e.g. fuzzy systems, reinforcement learning, or GAs) that dovetail their respective strengths. Most developments on hybrid GA and neural networks focus on using the GA to enhance the design of the neural network, especially the determination of the network structure. GAs evolve network topologies, starting from a set of arbitrary parameters, and help ensure a globally optimal network topology. Likewise, particle swarm optimization (PSO), another global, population-based algorithm, has the same capability of improving the design of neural networks as GAs. Reinforcement learning, a kind of data-free method, pursues the maximum reward of adjusted actions based on the relationship between agent and environment; it is therefore well suited to system control, especially when there is a lack of training data for supervised learning methods. Fuzzy neural networks, thanks to the fuzzy knowledge base brought by the fuzzy system, are good at expressing the fuzzy rules required during system control. Apparently, hybrid frameworks have the capacity to deal with more complex issues, but they should be avoided if single networks perform well enough, because a complicated, coupled model structure imposes an extra computational burden.

Indeed, there are no "one-size-fits-all" neural networks, but a relatively better choice, just like a key to a lock, does exist for a given problem. SOMs work better in classification and data visualization; an optimization problem calls for Hopfield networks; back-propagation and RBF based networks may be appropriate for forecasting and control; RNNs are well suited to time-series problems. Therefore, the selection of the neural network architecture foreshadows how effectively the problem at hand can be handled.

【SQL】 MySQL Basic

SELECT

SELECT Column_Name FROM Table_Name
SELECT * FROM Table_Name
SELECT DISTINCT Column_Name FROM Table_Name
SELECT Column_Name FROM Table_Name LIMIT Number
SELECT Function(Column_Name) FROM Table_Name 
  • avg()
  • count()
  • first()
  • last()
  • max()
  • min()
  • sum()

    Condition

    SELECT Column_Name FROM Table_Name WHERE Condition

    Option          Description
    >, <, >=, <=    greater/less than (or equal to) a specific value
    =, !=           equal or not equal to a specific value
    BETWEEN         within a certain range
    LIKE            pattern search, usually combined with wildcards

    Sort

    SELECT Column_Name FROM Table_Name ORDER BY Column_Name
    

    Alias

    SELECT Column_Name AS Alias FROM Table_Name
    
    SELECT Column_Name FROM Table_A_Name AS A JOIN Table_B_Name AS B