
    Achieving XGBoost-Level Performance with the Interpretability and Speed of CART - Berkeley Artificial Intelligence Research Blog

    11 May 2023





    FIGS (Fast Interpretable Greedy-tree Sums): a method for building interpretable models by simultaneously growing an ensemble of decision trees in competition with one another.

    Recent advances in machine learning have led to increasingly complex predictive models, often at the cost of interpretability. Interpretability is often essential, especially in high-stakes applications such as clinical decision-making; interpretable models help with identifying errors, leveraging domain knowledge, and making fast predictions.

    In this blog post, we'll cover FIGS, a new method for fitting an interpretable model that takes the form of a sum of trees. Real-world experiments and theoretical results demonstrate that FIGS can effectively fit a wide range of data, achieving state-of-the-art performance in several settings, all without sacrificing interpretability.

    How does FIGS work?

    Intuitively, FIGS works by extending CART, a typical greedy algorithm for growing a single decision tree, to instead grow a flexible ensemble of trees simultaneously (see Fig. 1). At each iteration, FIGS may grow any of the trees it has already started, or start a new tree; it greedily selects whichever rule most reduces the total unexplained variance (or an alternative splitting criterion). To keep the trees in sync with one another, each tree is made to predict the residuals that remain after summing the predictions of all other trees (see the paper for details).
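
    To make this loop concrete, here is a minimal regression-style sketch of the procedure described above. It is my own illustrative reconstruction, not the imodels implementation; helper names such as best_split and fit_figs are invented, and trees are represented simply as partitions of the training samples.

    import numpy as np

    def best_split(X, r, idx):
        # find the split of samples `idx` that most reduces the sum of
        # squared residuals `r`; returns (gain, feature, threshold) or None
        best, base = None, ((r[idx] - r[idx].mean()) ** 2).sum()
        for j in range(X.shape[1]):
            for t in np.unique(X[idx, j])[:-1]:  # both sides stay non-empty
                left, right = idx[X[idx, j] <= t], idx[X[idx, j] > t]
                sse = ((r[left] - r[left].mean()) ** 2).sum() \
                    + ((r[right] - r[right].mean()) ** 2).sum()
                if best is None or base - sse > best[0]:
                    best = (base - sse, j, t)
        return best

    def tree_predict(tree, n):
        pred = np.zeros(n)
        for idx, value in tree:  # a tree is a list of (indices, value) leaves
            pred[idx] = value
        return pred

    def fit_figs(X, y, max_rules=10):
        trees, n = [], len(y)
        for _ in range(max_rules):  # each iteration adds exactly one rule
            preds = [tree_predict(t, n) for t in trees]
            total = sum(preds) if trees else np.zeros(n)
            cands = []
            # option 1: split a leaf of an existing tree, scored on the
            # residual that tree is responsible for predicting
            for k, tree in enumerate(trees):
                r = y - (total - preds[k])
                for li, (idx, _) in enumerate(tree):
                    s = best_split(X, r, idx)
                    if s is not None:
                        cands.append((s[0], k, li, s[1], s[2]))
            # option 2: start a brand-new tree on the ensemble's residual
            s = best_split(X, y - total, np.arange(n))
            if s is not None:
                cands.append((s[0], len(trees), None, s[1], s[2]))
            if not cands:
                break
            _, k, li, j, t = max(cands, key=lambda c: c[0])
            if li is None:  # the greedy winner was starting a new tree
                trees.append([(np.arange(n), 0.0)])
                li = 0
            preds = [tree_predict(tt, n) for tt in trees]
            r = y - (sum(preds) - preds[k])  # residual for the chosen tree
            idx, _ = trees[k][li]
            trees[k][li:li + 1] = [(idx[X[idx, j] <= t], 0.0),
                                   (idx[X[idx, j] > t], 0.0)]
            # refresh all leaf values of the updated tree on its residual
            trees[k] = [(i, r[i].mean()) for i, _ in trees[k]]
        return trees

    Since each iteration adds exactly one split, max_rules in this sketch caps the total number of splits summed across all trees, mirroring the role of max_rules in the package.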

    FIGS is intuitively similar to ensemble approaches such as gradient boosting and random forests but, importantly, because all trees are grown to compete with each other, the model can better adapt to the underlying structure of the data. The number of trees and the size/shape of each tree emerge automatically from the data rather than being specified manually.



    Fig. 1. A high-level intuition for how FIGS fits a model.

    An example of using FIGS

    Using FIGS is straightforward. It is easily installed via the imodels package (pip install imodels) and can then be used just like standard scikit-learn models: simply import a classifier or regressor and use the fit and predict methods. Here is a complete example of using it on a sample clinical dataset in which the target is risk of cervical spine injury (CSI).

    from imodels import FIGSClassifier, get_clean_dataset
    from sklearn.model_selection import train_test_split
    
    # prepare data (here, a sample clinical dataset)
    X, y, feat_names = get_clean_dataset('csi_pecarn_pred')
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=42)
    
    # fit the model
    model = FIGSClassifier(max_rules=4)  # initialize a model
    model.fit(X_train, y_train)   # fit model
    preds = model.predict(X_test) # discrete predictions: shape is (n_test,)
    preds_proba = model.predict_proba(X_test) # predicted probabilities: shape is (n_test, n_classes)
    
    # visualize the model
    model.plot(feature_names=feat_names, filename='out.svg', dpi=300)
    

    This results in a simple model: it contains only 4 splits, since we specified that the model should have no more than 4 rules (max_rules=4). Predictions are made by dropping a sample down every tree and summing the risk-adjustment values obtained from the leaves it reaches in each tree. This model is highly interpretable, as a clinician can now (i) easily make predictions using the 4 relevant features and (ii) vet the model to ensure it matches their domain expertise. Note that this model is for illustrative purposes only, and achieves ~84% accuracy.
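
    As a quick sanity check of that figure (my addition, using scikit-learn's accuracy_score on the test split created above):

    from sklearn.metrics import accuracy_score

    # evaluate the 4-rule model on the held-out test set; the post reports ~84%
    print(f'test accuracy: {accuracy_score(y_test, preds):.3f}')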



    Fig. 2. A simple model learned by FIGS for predicting risk of cervical spine injury.

    If we want a more flexible model, we can also remove the constraint on the number of rules (changing the code to model = FIGSClassifier()), resulting in a larger model (see Fig. 3). Note that the number of trees and how balanced they are emerges from the structure of the data; only the total number of rules may be specified.
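
    For example, reusing the training split from above (imodels models print their learned rules, so printing the fitted model displays its trees):

    # no cap on the number of rules: tree count and shapes adapt to the data
    model_full = FIGSClassifier()
    model_full.fit(X_train, y_train)
    print(model_full)  # displays the larger set of learned trees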



    Fig. 3. A slightly larger model learned by FIGS to predict cervical spine injury risk.

    How well does FIGS work?

    In many settings where interpretability is desired, such as modeling clinical decision rules, FIGS achieves state-of-the-art performance. For example, Fig. 4 shows various datasets on which FIGS performs excellently, particularly when limited to very few total splits.



    Fig. 4. FIGS predicts well using very few splits.

    Why does FIGS work so well?

    FIGS is motivated by the observation that a single decision tree often contains splits that are repeated across different branches, which can happen when the data has additive structure. Growing multiple trees helps to avoid this by disentangling the additive components into separate trees.
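
    A small simulation makes this concrete. This is my own illustration rather than an experiment from the paper: on additively generated data, a single CART tree must repeat one feature's split in both branches of the other's, while FIGS can devote a separate tree to each component.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text
    from imodels import FIGSRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(1000, 2))
    # additive target: one step function per feature, plus a little noise
    y = (X[:, 0] > 0).astype(float) + (X[:, 1] > 0).astype(float) \
        + 0.1 * rng.normal(size=1000)

    cart = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)
    print(export_text(cart))  # one feature's split appears in both branches

    figs = FIGSRegressor(max_rules=2).fit(X, y)
    print(figs)  # typically two single-split trees, one per additive component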

    Conclusion

    Overall, interpretable modeling offers an alternative to common black-box modeling and in many cases can offer massive improvements in efficiency and transparency without sacrificing performance.


    This post is based on two papers: FIGS and G-FIGS; all code is available via the imodels package. This is joint work with Keyan Nasseri, Abhineet Agarwal, James Duncan, Omer Ronen, and Aaron Kornblith.

