Nonlinear regression without i.i.d. assumption
Probability, Uncertainty and Quantitative Risk volume 4, Article number: 8 (2019)
Abstract
In this paper, we consider a class of nonlinear regression problems without the assumption that the data are independent and identically distributed. We propose a corresponding minimax problem for nonlinear regression and give a numerical algorithm. The algorithm can be applied to regression and machine learning problems, and yields better results than traditional least-squares and machine-learning methods.
Introduction
In statistics, linear regression is a linear approach for modelling the relationship between a response variable y and one or more explanatory variables denoted by x:

\(y=w^{T}x+b+\varepsilon. \qquad (1)\)

Here, ε is a random noise term. The associated noise terms \(\{\varepsilon _{i}\}_{i=1}^{m}\) are assumed to be i.i.d. (independent and identically distributed) with mean 0 and variance σ^{2}. The parameters w, b are estimated via the method of least squares as follows.
Lemma 1
Suppose \(\{(x_{i},y_{i})\}_{i=1}^{m}\) are drawn from the linear model (1). Then the result of least squares is

\(\begin{pmatrix}\hat{w}\\ \hat{b}\end{pmatrix}=A^{+}Y.\)

Here,

\(A=\begin{pmatrix}x_{1}^{T} & 1\\ \vdots & \vdots\\ x_{m}^{T} & 1\end{pmatrix},\qquad Y=\begin{pmatrix}y_{1}\\ \vdots\\ y_{m}\end{pmatrix},\)

and A^{+} is the Moore–Penrose inverse^{Footnote 1} of A.
In the above lemma, ε_{1},ε_{2},⋯,ε_{m} are assumed to be i.i.d. Therefore, y_{1},y_{2},⋯,y_{m} are also i.i.d.
When the i.i.d. assumption is not satisfied, the usual method of least squares does not work well. This is illustrated by the following example.
Example 1
Denote by \(\mathcal {N}\left (\mu,\sigma ^{2}\right)\) the normal distribution with mean μ and variance σ^{2}, and denote by δ_{c} the Dirac distribution concentrated at c, i.e.,

\(\delta_{c}(A)=\begin{cases}1, & c\in A,\\ 0, & c\notin A.\end{cases}\)
Suppose the sample data are generated by
where
The result of the usual least squares is
which is displayed in Fig. 1 .
We see from Fig. 1 that most of the sample data deviate from the regression line. The main reason is that the i.i.d. condition is violated.
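The failure mode can be reproduced numerically. The sketch below (with a hypothetical noise mixture, since the exact parameters of Example 1 are not restated here) fits a line by the Moore–Penrose least squares of Lemma 1 and shows the fit being dragged away from the true line by non-identically distributed noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-i.i.d. sample in the spirit of Example 1: a large
# fraction of the noise terms carry a constant (Dirac-type) shift, so
# the noise is not identically distributed across the sample.
m = 100
x = np.linspace(0.0, 10.0, m)
eps = np.where(rng.random(m) < 0.75, 5.0, rng.normal(0.0, 0.1, m))
y = 2.0 * x + 1.0 + eps            # true line: w = 2, b = 1

# Least squares via the Moore-Penrose inverse, as in Lemma 1.
A = np.column_stack([x, np.ones(m)])
w_hat, b_hat = np.linalg.pinv(A) @ y

# The fitted intercept is pulled far from b = 1 by the shifted noise,
# so the regression line misses the bulk structure of the data.
```

The slope is roughly recovered, but the intercept absorbs the noise shift, which is the kind of systematic deviation visible in Fig. 1.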
To overcome the above difficulty, Lin et al. (2016) studied linear regression without the i.i.d. condition by using the nonlinear expectation framework laid out by Peng (2005). They split the training set into several groups such that the i.i.d. condition can be satisfied within each group. The average loss is used for each group, and the maximum of the average losses among groups is used as the final loss function. They show that the linear regression problem under the nonlinear expectation framework reduces to the following minimax problem:

\(\min_{w,b}\max_{1\le j\le N}\frac{1}{n_{j}}\sum_{i=1}^{n_{j}}\left(w^{T}x_{i}^{(j)}+b-y_{i}^{(j)}\right)^{2}. \qquad (2)\)
They suggest a genetic algorithm to solve this problem. However, such a genetic algorithm does not work well in general.
Motivated by the work of Lin et al. (2016) and Peng (2005), we consider nonlinear regression problems without the i.i.d. assumption in this paper. We propose a corresponding minimax problem and give a numerical algorithm for solving it. Meanwhile, problem (2) in Lin’s paper can also be solved well by this algorithm. We also present experiments on least-squares and machine learning problems.
Nonlinear regression without i.i.d. assumption
Nonlinear regression is a form of regression analysis in which observational data are modeled by a nonlinear function which depends on one or more explanatory variables (see, e.g., Seber and Wild (1989)).
Suppose the sample data (training set) are

\(\{(x_{i},y_{i})\}_{i=1}^{m},\)

where x_{i}∈X and y_{i}∈Y. X is called the input space and Y is called the output (label) space. The goal of nonlinear regression is to find (learn) a function g^{θ}:X→Y from the hypothesis space \(\{g^{\lambda}:X\to Y\mid \lambda\in\Lambda\}\) such that g^{θ}(x_{i}) is as close to y_{i} as possible.
The closeness is usually characterized by a loss function φ such that φ(g^{θ}(x_{1}),y_{1},⋯,g^{θ}(x_{m}),y_{m}) attains its minimum if and only if

\(g^{\theta}(x_{i})=y_{i},\quad i=1,\cdots,m.\)
Then the nonlinear regression problem (learning problem) is reduced to an optimization problem of minimizing φ.
Following are two kinds of loss functions, namely, the average loss

\(\varphi=\frac{1}{m}\sum_{i=1}^{m}\left(g^{\theta}(x_{i})-y_{i}\right)^{2}\)

and the maximal loss

\(\varphi=\max_{1\le i\le m}\left(g^{\theta}(x_{i})-y_{i}\right)^{2}.\)
The average loss is popular, particularly in machine learning, since it can be conveniently minimized by online algorithms, which process fewer instances per iteration. The idea behind the average loss is to learn a function that performs equally well on each training point. However, when the i.i.d. assumption is not satisfied, the average loss may perform poorly.
To overcome this difficulty, we use the maxmean as the loss function. First, we split the training set into several groups such that the i.i.d. condition can be satisfied within each group. Then, the average loss is used for each group, and the maximum of the average losses among groups is used as the final loss function. We propose the following minimax problem for nonlinear regression:

\(\min_{\theta}\max_{1\le j\le N}\frac{1}{n_{j}}\sum_{i=1}^{n_{j}}\left(g^{\theta}\left(x_{i}^{(j)}\right)-y_{i}^{(j)}\right)^{2},\qquad (3)\)

where \(x_{i}^{(j)}\) denotes the i-th sample in group j.
Here, n_{j} is the number of samples in group j.
Problem (3) is a generalization of problem (2). Next, we will give a numerical algorithm which solves problem (3).
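The maxmean objective of problem (3) is straightforward to evaluate once the training set has been split into groups. A minimal sketch, assuming a squared loss and a user-supplied model function (the linear model and toy data below are illustrative, not from the paper):

```python
import numpy as np

def maxmean_loss(params, groups, model):
    """Maximum over groups of the within-group average squared loss,
    i.e. the objective of the minimax formulation."""
    losses = []
    for xg, yg in groups:
        residual = model(params, xg) - yg
        losses.append(np.mean(residual ** 2))
    return max(losses)

# Toy usage with a linear model g^theta(x) = w*x + b.
model = lambda p, x: p[0] * x + p[1]
groups = [
    (np.array([0.0, 1.0]), np.array([1.0, 3.0])),  # fits w=2, b=1 exactly
    (np.array([0.0, 1.0]), np.array([2.0, 4.0])),  # shifted group
]
print(maxmean_loss(np.array([2.0, 1.0]), groups, model))  # → 1.0
```

Minimizing this quantity over `params` forces the model to do well on its worst group, rather than merely on average.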
Remark 1
Jin and Peng (2016) put forward a maxmean method to give the parameter estimation when the usual i.i.d. condition is not satisfied. They show that if Z_{1},Z_{2},⋯,Z_{k} are drawn from the maximal distribution \(M_{[\underline {\mu },\overline {\mu }]}\) and are nonlinearly independent, then the optimal unbiased estimation for \(\overline {\mu }\) is

\(\max\left\{Z_{1},Z_{2},\cdots,Z_{k}\right\}.\)
This fact, combined with the Law of Large Numbers (Theorem 19 in Jin and Peng (2016)) leads to the maxmean estimation of μ. We borrow this idea and use the maxmean as the loss function for the nonlinear regression problem.
Algorithm
Problem (3) is a minimax problem. Minimax problems arise in various mathematical fields, such as game theory and worst-case optimization. The general minimax problem is described as

\(\min_{u\in\mathbb{R}^{n}}\max_{v\in V}h(u,v). \qquad (4)\)
Here, h is continuous on \(\mathbb {R}^{n}\times V\) and differentiable with respect to u.
Problem (4) was considered theoretically by Klessig and Polak (1973) and Panin (1981). Later, Kiwiel (1987) gave a concrete algorithm for problem (4). Kiwiel’s algorithm deals with the general case in which V is a compact subset of \(\mathbb {R}^{d}\), and its convergence can be slow when the number of parameters is large.
In our case, V={1,2,⋯,N} is a finite set and we give a simplified and faster algorithm.
Denote

\(\Phi(u)=\max_{1\le j\le N}f_{j}(u).\)

Suppose each f_{j} is differentiable. Now, we outline the iterative algorithm for the following discrete minimax problem:

\(\min_{u\in\mathbb{R}^{n}}\Phi(u).\)
The main difficulty is to find a descent direction at each iteration point u_{k} (k=0,1,⋯) since Φ is nonsmooth in general. In light of this, we linearize f_{j} at u_{k} and obtain the convex approximation of Φ as

\(\hat{\Phi}(u)=\max_{1\le j\le N}\left\{f_{j}(u_{k})+\nabla f_{j}(u_{k})^{T}(u-u_{k})\right\}.\)
Next, we find u_{k+1}, which minimizes \(\hat {\Phi }(u)\). In general, \(\hat {\Phi }\) is not strictly convex with respect to u, and thus it may not admit a minimum. Motivated by the alternating direction method of multipliers (ADMM, see, e.g., Boyd et al. (2010); Kellogg (1969)), we add a regularization term, and the minimization problem becomes

\(\min_{u}\ \hat{\Phi}(u)+\frac{1}{2}\|u-u_{k}\|^{2}. \qquad (5)\)
By setting d=u−u_{k}, the above is converted to the following form:

\(\min_{d}\ \max_{1\le j\le N}\left\{f_{j}(u_{k})+\nabla f_{j}(u_{k})^{T}d\right\}+\frac{1}{2}\|d\|^{2},\)

which is equivalent to

\(\min_{d,z}\ z+\frac{1}{2}d^{T}d \qquad (6)\)

\(\text{s.t.}\ f_{j}(u_{k})+\nabla f_{j}(u_{k})^{T}d\le z,\quad j=1,\cdots,N. \qquad (7)\)
Problem (6)-(7) is a semidefinite QP (quadratic programming) problem. When n is large, popular QP algorithms (such as the active-set method) are time-consuming, so we turn to the dual problem.
Theorem 1
Denote \(G=\nabla f\in \mathbb {R}^{N\times n}\), \(f=(f_{1},\cdots,f_{N})^{T}\), both evaluated at the current iterate u_{k}, and \(e=(1,1,\cdots,1)^{T}\). If λ is the solution of the following QP problem

\(\max_{\lambda}\ \lambda^{T}f-\frac{1}{2}\lambda^{T}GG^{T}\lambda \qquad (8)\)

\(\text{s.t.}\ \lambda\ge 0,\quad \lambda^{T}e=1, \qquad (9)\)

then d=−G^{T}λ is the solution of problem (6)-(7).
Proof
See Appendix. □
Remark 2
Problem (8)-(9) can be solved by many standard methods, such as the active-set method (see, e.g., Nocedal and Wright (2006)). The dimension of the dual problem (8)-(9) is N (the number of groups), which is independent of n (the number of parameters). Hence, the algorithm is fast and stable, especially for deep neural networks.
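Since the dual constraints in Theorem 1 restrict λ to the probability simplex, the descent direction can be computed by any simplex-constrained QP solver. A self-contained sketch using projected gradient ascent (an illustrative solver choice, not the paper's; the text suggests, e.g., the active-set method):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def descent_direction(G, f, iters=1000):
    """Solve the dual QP  max_lam  lam^T f - 0.5*||G^T lam||^2  over
    {lam >= 0, sum(lam) = 1} by projected gradient ascent, then return
    d = -G^T lam as in Theorem 1."""
    N = len(f)
    lam = np.full(N, 1.0 / N)
    H = G @ G.T                              # N x N Gram matrix
    lr = 1.0 / (np.linalg.norm(H) + 1.0)     # safe fixed step size
    for _ in range(iters):
        lam = project_simplex(lam + lr * (f - H @ lam))
    return -G.T @ lam, lam

# Usage: Phi(u) = max(u^2, (u - 2)^2) at u = 1, so f = (1, 1) and the
# two gradients stack into G. The returned d is ~0: u = 1 minimizes Phi.
d, lam = descent_direction(np.array([[2.0], [-2.0]]), np.array([1.0, 1.0]))
```

The iteration cost is governed by N, the number of groups, matching the dimension argument of Remark 2.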
Set d_{k}=−G^{T}λ. The next theorem shows that d_{k} is a descent direction.
Theorem 2
If d_{k}≠0, then there exists t_{0}>0 such that

\(\Phi(u_{k}+td_{k})<\Phi(u_{k}),\quad \forall t\in(0,t_{0}).\)
Proof
See Appendix. □
For a function F, the directional derivative of F at x in a direction d is defined as

\(F'(x;d)=\lim_{t\to 0^{+}}\frac{F(x+td)-F(x)}{t}.\)
The necessary optimality condition for a function F to attain its minimum at x (see Demyanov and Malozemov (1977)) is

\(F'(x;d)\ge 0,\quad\forall d\in\mathbb{R}^{n}.\)

Such an x is called a stationary point of F.
Theorem 2 shows that when d_{k}≠0, we can always find a descent direction. The next theorem reveals that when d_{k}=0,u_{k} is a stationary point.
Theorem 3
If d_{k}=0, then u_{k} is a stationary point of Φ, i.e.,

\(\Phi'(u_{k};d)\ge 0,\quad\forall d\in\mathbb{R}^{n}. \qquad (10)\)
Proof
See Appendix. □
Remark 3
When each f_{j} is a convex function, Φ is also a convex function. Then, the stationary point of Φ becomes the global minimum point.
With d_{k} being the descent direction, we use line search to find the appropriate step size and update the iteration point.
Now, let us conclude the above discussion by giving the concrete steps of the algorithm for the minimax problem \(\min_{u\in\mathbb{R}^{n}}\max_{1\le j\le N}f_{j}(u)\).
Algorithm.
Step 1. Initialization
Select arbitrary \(u_{0}\in \mathbb {R}^{n}\). Set k=0, termination accuracy ξ=10^{−8}, gap tolerance δ=10^{−7}, and step size factor σ=0.5.
Step 2. Finding Descent Direction
Assume that we have chosen u_{k}. Compute the Jacobian matrix

\(G=\left(\nabla f_{1}(u_{k}),\cdots,\nabla f_{N}(u_{k})\right)^{T},\)

where

\(\nabla f_{j}(u_{k})=\left(\frac{\partial f_{j}}{\partial u_{1}}(u_{k}),\cdots,\frac{\partial f_{j}}{\partial u_{n}}(u_{k})\right)^{T}.\)

Solve the quadratic programming problem (8)-(9) with gap tolerance δ (see, e.g., Nocedal and Wright (2006)).
Take d_{k}=−G^{T}λ. If ∥d_{k}∥<ξ, stop. Otherwise, goto Step 3.
Step 3. Line Search
Find the smallest natural number j such that

\(\Phi\left(u_{k}+\sigma^{j}d_{k}\right)<\Phi(u_{k}).\)
Take α_{k}=σ^{j} and set u_{k+1}=u_{k}+α_{k}d_{k}, k=k+1. Go to Step 2.
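Step 3 can be sketched as follows, assuming a simple-decrease acceptance test for the backtracking (a common choice; Theorem 2 guarantees such a decrease exists for small steps along d_k):

```python
def line_search(Phi, u, d, sigma=0.5, max_j=50):
    """Step 3: find the smallest j with a decrease in Phi along the
    descent direction d, and return the step size alpha = sigma**j.
    The acceptance test here is simple decrease (a sketch; the exact
    rule of the paper may differ).  Returns 0.0 if no decrease is
    found within max_j backtracking steps."""
    base = Phi(u)
    for j in range(max_j):
        alpha = sigma ** j
        if Phi(u + alpha * d) < base:
            return alpha
    return 0.0

# Usage on Phi(u) = max(u^2, (u - 2)^2) at u = 1.5 with direction d = -1:
# the full step overshoots, so backtracking settles on alpha = 0.5.
Phi = lambda u: max(u**2, (u - 2.0)**2)
alpha = line_search(Phi, 1.5, -1.0)  # → 0.5
```

The update is then `u = u + alpha * d`, after which the algorithm returns to Step 2.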
Experiments
The linear regression case
Example 1 can be numerically well solved by the above algorithm with

\(f_{j}(w,b)=\frac{1}{n_{j}}\sum_{i=1}^{n_{j}}\left(wx_{i}^{(j)}+b-y_{i}^{(j)}\right)^{2},\quad j=1,\cdots,N.\)

The corresponding optimization problem is

\(\min_{w,b}\max_{1\le j\le N}f_{j}(w,b).\)
The numerical result using the algorithm in Section 1 is
The result is summarized in Fig. 2. Note that the minimax method (black line) performs better than the traditional least squares method (pink line).
Next, we compare the two methods. Both the l^{2} distance and the l^{1} distance are used as measurements.
We see from Table 1 that the minimax method outperforms the traditional method in both the l^{2} and l^{1} distances.
Lin et al. (2016) mentioned that the above problem can be solved by genetic algorithms. However, the genetic algorithm is heuristic and unstable, especially when the number of groups is large. In contrast, our algorithm is fast and stable, and its convergence is proved.
The machine learning case
We further test the proposed method on the CelebFaces Attributes Dataset (CelebA)^{Footnote 2} and implement the minimax algorithm with a deep learning approach. The CelebA dataset has 202,599 face images, among which 13,193 (6.5%) have eyeglasses. The objective is eyeglass detection. We use a single-hidden-layer neural network to compare the two methods.
We randomly choose 20,000 pictures as the training set, among which 5% have eyeglass labels. For the traditional method, the 20,000 pictures are used as a whole. For the minimax method, we separate the 20,000 pictures into 20 groups: only 1 group contains eyeglass pictures, while the other 19 groups do not. In this way, the whole minibatch is not i.i.d., while each subgroup is expected to be i.i.d.
The traditional method uses the following loss
The minimax method uses the maximal group loss
Here, σ is an activation function in deep learning, such as the sigmoid function

\(\sigma(x)=\frac{1}{1+e^{-x}}.\)
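The per-group losses behind the two training objectives can be sketched as follows. This uses a simplified one-layer logistic model rather than the paper's single-hidden-layer network, and the weights and data are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def group_losses(W, b, groups):
    """Average squared loss per group for a one-layer sigmoid model
    y_hat = sigmoid(x @ W + b).  The minimax method trains against
    max(losses); the traditional method against their overall mean."""
    losses = []
    for X, y in groups:
        y_hat = sigmoid(X @ W + b)
        losses.append(np.mean((y_hat - y) ** 2))
    return losses

# Toy usage: two groups, only one of which has positive labels,
# mimicking the unbalanced group split of the experiment.
rng = np.random.default_rng(1)
W, b = rng.normal(size=3), 0.0
g1 = (rng.normal(size=(5, 3)), np.ones(5))    # eyeglass group
g2 = (rng.normal(size=(5, 3)), np.zeros(5))   # no-eyeglass group
losses = group_losses(W, b, [g1, g2])
minimax_loss, traditional_loss = max(losses), np.mean(losses)
```

Because the rare positive group contributes only one of the N group losses, the max objective keeps it visible to the optimizer even when it would be diluted in the overall mean.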
We perform the two methods for 100 iterations. We see from Fig. 3 that the minimax method converges much faster than the traditional method. Figure 4 also shows that the minimax method performs better than the traditional method in accuracy. (Suppose the total number of samples in the test set is n, and m of them are classified correctly. Then the accuracy is defined to be m/n.)
The average accuracy for the minimax method is 74.52%, while that of the traditional method is 41.78%. Thus, in the deep learning approach with a single hidden layer, the minimax method speeds up convergence on unbalanced training data and improves accuracy as well. We also expect improvements with multilayer deep learning approaches.
Conclusion
In this paper, we consider a class of nonlinear regression problems without the assumption that the data are independent and identically distributed. We propose a corresponding minimax problem for nonlinear regression and give a numerical algorithm. The algorithm can be applied to regression and machine learning problems, and yields better results than traditional least-squares and machine-learning methods.
Appendix
Proof of Theorem 1
Consider the Lagrange function

\(L(d,z,\lambda)=z+\frac{1}{2}d^{T}d+\sum_{j=1}^{N}\lambda_{j}\left(f_{j}+(Gd)_{j}-z\right),\quad \lambda\ge 0.\)

It is easy to verify that problem (6)-(7) is equivalent to the following minimax problem:

\(\min_{d,z}\max_{\lambda\ge 0}L(d,z,\lambda).\)

By the strong duality theorem (see, e.g., (Boyd and Vandenberghe 2004)),

\(\min_{d,z}\max_{\lambda\ge 0}L(d,z,\lambda)=\max_{\lambda\ge 0}\min_{d,z}L(d,z,\lambda).\)

Set e=(1,1,⋯,1)^{T}; the above problem is equivalent to

\(\max_{\lambda\ge 0}\min_{d,z}\left\{\left(1-\lambda^{T}e\right)z+\lambda^{T}f+\lambda^{T}Gd+\frac{1}{2}d^{T}d\right\}.\)

Note that

\(\min_{z}\left(1-\lambda^{T}e\right)z=\begin{cases}0, & \lambda^{T}e=1,\\ -\infty, & \lambda^{T}e\ne 1.\end{cases}\)

If 1−λ^{T}e≠0, then the above is −∞. Thus, we must have 1−λ^{T}e=0 when the maximum is attained. The problem is converted to

\(\max_{\lambda\ge 0,\ \lambda^{T}e=1}\ \min_{d}\left\{\lambda^{T}f+\lambda^{T}Gd+\frac{1}{2}d^{T}d\right\}.\)

The inner minimization problem has the solution d=−G^{T}λ, and the above problem is reduced to

\(\max_{\lambda}\ \lambda^{T}f-\frac{1}{2}\lambda^{T}GG^{T}\lambda\quad \text{s.t.}\ \lambda\ge 0,\ \lambda^{T}e=1,\)

which is exactly problem (8)-(9).
Proof of Theorem 2
Denote u=u_{k}, d=d_{k}. For 0<t<1,

\(\Phi(u+td)=\max_{1\le j\le N}\left\{f_{j}(u)+t\nabla f_{j}(u)^{T}d\right\}+o(t)\le(1-t)\Phi(u)+t\max_{1\le j\le N}\left\{f_{j}(u)+\nabla f_{j}(u)^{T}d\right\}+o(t).\)

Since d is the solution of problem (5), we have that

\(\max_{1\le j\le N}\left\{f_{j}(u)+\nabla f_{j}(u)^{T}d\right\}+\frac{1}{2}\|d\|^{2}\le\Phi(u).\)

Therefore,

\(\Phi(u+td)\le\Phi(u)-\frac{t}{2}\|d\|^{2}+o(t).\)

For t>0 small enough, we have that

\(\Phi(u+td)<\Phi(u).\)
Proof of Theorem 3
Denote u=u_{k}. Then, d_{k}=0 means that ∀d,

\(\max_{1\le j\le N}\left\{f_{j}(u)+\nabla f_{j}(u)^{T}d\right\}+\frac{1}{2}\|d\|^{2}\ge\Phi(u). \qquad (11)\)

Denote

\(J(u)=\left\{j\mid f_{j}(u)=\Phi(u)\right\}.\)

Define

\(\Phi'(u;d)=\lim_{t\to 0^{+}}\frac{\Phi(u+td)-\Phi(u)}{t}.\)

Then (see Demyanov and Malozemov (1977))

\(\Phi'(u;d)=\max_{j\in J(u)}\nabla f_{j}(u)^{T}d. \qquad (12)\)

When ∥d∥ is small enough, we have that

\(\max_{1\le j\le N}\left\{f_{j}(u)+\nabla f_{j}(u)^{T}d\right\}=\Phi(u)+\max_{j\in J(u)}\nabla f_{j}(u)^{T}d.\)

In view of (11), we have that for ∥d∥ small enough,

\(\max_{j\in J(u)}\nabla f_{j}(u)^{T}d\ge-\frac{1}{2}\|d\|^{2}.\)

For any \(d_{1}\in \mathbb {R}^{n}\), by taking d=rd_{1} with sufficiently small r>0, we have that

\(\max_{j\in J(u)}\nabla f_{j}(u)^{T}d_{1}\ge-\frac{r}{2}\|d_{1}\|^{2}.\)

Letting r→0^{+},

\(\max_{j\in J(u)}\nabla f_{j}(u)^{T}d_{1}\ge 0.\)

Thus, we fulfill the proof by combining with (12).
Availability of data and materials
Please contact author for data requests.
Notes
 1.
For the definition and properties of the Moore–Penrose inverse, see (Ben-Israel and Greville 2003).
 2.
Abbreviations
i.i.d.: Independent and identically distributed

MAE: Mean absolute error

MSE: Mean squared error
References
Ben-Israel, A., Greville, T. N. E.: Generalized Inverses: Theory and Applications (2nd ed.). Springer, New York (2003).
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 3, 1–122 (2010).
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004). https://doi.org/10.1017/cbo9780511804441.005.
Demyanov, V. F., Malozemov, V. N.: Introduction to Minimax. Wiley, New York (1977).
Jin, H., Peng, S.: Optimal Unbiased Estimation for Maximal Distribution (2016). https://arxiv.org/abs/1611.07994.
Kellogg, R. B.: Nonlinear alternating direction algorithm. Math. Comp. 23, 23–38 (1969).
Kendall, M. G., Stuart, A.: The Advanced Theory of Statistics, Volume 3: Design and Analysis, and Time-Series (2nd ed.). Griffin, London (1968).
Kiwiel, K. C.: A Direct Method of Linearization for Continuous Minimax Problems. J. Optim. Theory Appl. 55, 271–287 (1987).
Klessig, R., Polak, E.: An Adaptive Precision Gradient Method for Optimal Control. SIAM J. Control. 11, 80–93 (1973).
Legendre, A. M.: Nouvelles méthodes pour la détermination des orbites des comètes. F. Didot, Paris (1805).
Lin, L., Shi, Y., Wang, X., Yang, S.: k-sample upper expectation linear regression: Modeling, identifiability, estimation and prediction. J. Stat. Plan. Infer. 170, 15–26 (2016).
Lin, L., Dong, P., Song, Y., Zhu, L.: Upper Expectation Parametric Regression. Stat. Sin. 27, 1265–1280 (2017a).
Lin, L., Liu, Y. X., Lin, C.: Minimax-risk and minimean-risk inferences for a partially piecewise regression. Statistics 51, 745–765 (2017b).
Nocedal, J., Wright, S. J.: Numerical Optimization. Second Edition. Springer, New York (2006).
Panin, V. M.: Linearization Method for Continuous Minmax Problems. Kibernetika. 2, 75–78 (1981).
Peng, S.: Nonlinear expectations and nonlinear Markov chains. Chin. Ann. Math. 26B(2), 159–184 (2005).
Seber, G. A. F., Wild, C. J.: Nonlinear Regression. Wiley, New York (1989).
Acknowledgments
The authors would like to thank Professor Shige Peng for useful discussions. We especially thank Xuli Shen for performing the experiment in the machine learning case.
Funding
This paper is partially supported by Smale Institute.
Author information
Affiliations
Contributions
MX puts forward the main idea and the algorithm. QX proves the convergence of the algorithm and collects the results. Both authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Xu, Q., Xuan, X.: Nonlinear regression without i.i.d. assumption. Probab. Uncertain. Quant. Risk 4, 8 (2019). https://doi.org/10.1186/s41546-019-0042-6
Keywords
 Nonlinear regression
 Minimax
 Independent
 Identically distributed
 Least squares
 Machine learning
 Quadratic programming