[1] 
J. Garcke and M. Griebel.
Classification with sparse grids using simplicial basis functions.
Intelligent Data Analysis, 6(6):483–502, 2002.
(shortened version appeared in KDD 2001, Proc. of the Seventh ACM
SIGKDD, F. Provost and R. Srikant (eds.), pages 87–96, ACM, 2001). Recently we presented a new approach [?] to the classification problem arising in data mining. It is based on the regularization network approach, but in contrast to other methods, which employ ansatz functions associated with data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [?]. Thus, only O(h_{n}^{-1} n^{d-1}) instead of O(h_{n}^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_{n} = 2^{-n} gives the mesh size. We use the sparse grid combination technique [?], where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method computes a nonlinear classifier but scales only linearly with the number of data points; it is therefore well suited for data mining applications where the amount of data is very large but the dimension of the feature space is moderately high. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions, and the algorithm needs fewer operations per data point. We further extend the method to so-called anisotropic sparse grids, where different a priori chosen mesh sizes can be used for the discretization of each attribute. This can improve the run time of the method and the approximation results for data sets in which the attributes are of different importance. We describe the sparse grid combination technique for the classification problem, give implementation details, and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points.
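The combination technique summarized above can be sketched in a few lines. The following Python snippet is only an illustration of the standard construction, not code from the paper; the function names are our own, and we use the common convention that level l_i in dimension i corresponds to mesh size 2^{-l_i}. It enumerates the component grids and their combination coefficients, u_n^c = Σ_{q=0}^{d-1} (-1)^q C(d-1, q) Σ_{|l|_1 = n-q} u_l.

```python
from math import comb, prod

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def grids_in_combination(d, n):
    """Level multi-indices l = (l_1, ..., l_d) and coefficients of the
    combination technique of level n in d dimensions:
        u_n^c = sum_{q=0}^{d-1} (-1)^q C(d-1, q) sum_{|l|_1 = n-q} u_l.
    """
    return [(l, (-1) ** q * comb(d - 1, q))
            for q in range(d)
            for l in compositions(n - q, d)]

def grid_points(l):
    """Point count of the conventional grid with level vector l,
    i.e. (2^{l_i} + 1) points per dimension."""
    return prod(2 ** li + 1 for li in l)
```

For example, for d = 2 and n = 3 the combination consists of the seven grids with |l|_1 ∈ {2, 3}; the coefficients always sum to 1, so constants are reproduced exactly. Each component grid is a small conventional grid, which is why the overall cost grows like O(h_{n}^{-1} n^{d-1}) rather than O(h_{n}^{-d}).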
Finally, we report on the quality of the classifier built by our new method on data sets with up to 14 dimensions. We show that it achieves correctness rates which are competitive with those of the best existing methods.
