Decision trees are roughly divided into two types according to the type of the objective variable Y: when Y is qualitative data the tree is called a "classification tree", and when Y is quantitative data it is called a "regression tree".
As the name implies, a regression tree can be used, like regression analysis, to predict the value of Y for any given X.
However, a regression tree essentially treats X as qualitative data (it only splits X into groups), so the numerical accuracy of the predictions is not very good. Trying to improve that accuracy tends to produce a complicated model that is hard to understand.
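A minimal sketch of this point, assuming the rpart package and the built-in mtcars data (neither is specified in the original): fit a regression tree for a quantitative Y and look at its predictions, which take only as many distinct values as there are leaves.

```r
# Sketch: regression tree with rpart (install.packages("rpart") if needed)
library(rpart)

# Regression tree: quantitative Y (mpg) predicted from the other variables
fit <- rpart(mpg ~ ., data = mtcars, method = "anova")
pred <- predict(fit, mtcars)

# The predictions are piecewise constant; only a handful of distinct values,
# which is why fine numerical accuracy is hard to get from a regression tree.
print(unique(pred))
print(fit)
```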
I think causal inference and data mining are more realistic uses of regression trees than prediction. Rather than regression analysis, the image is closer to running many analyses of variance, one at each split. It is useful as a tool for grouping Y appropriately and analyzing it.
Regression trees come as binary trees; there do not seem to be N-ary regression trees.
If you want the benefits of an N-ary tree, one approach is to cluster Y in one dimension and convert it into qualitative data, so that a classification tree can be used, as in the sketch below.
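A minimal sketch of that workaround, again assuming rpart and mtcars; the one-dimensional clustering here uses kmeans, and the number of groups (3) is an arbitrary choice.

```r
# Sketch: turn quantitative Y into groups, then fit a classification tree
library(rpart)

set.seed(1)
groups <- kmeans(mtcars$mpg, centers = 3)$cluster        # 1-D clustering of Y
dat    <- transform(mtcars, mpg_group = factor(groups))  # qualitative version of Y

# Classification tree on the grouped Y (the original mpg is excluded)
fit <- rpart(mpg_group ~ . - mpg, data = dat, method = "class")
print(fit)
```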
Examples in R are on the page "Decision tree by R".