Hi, thanks a lot for sharing your knowledge. I once read an article comparing the performance of SVMs, boosting, and tree algorithms. I didn't note the author of the publication, but they claimed that boosting algorithms push the predicted probabilities of a classification problem towards zero and one. The author also described how SVMs and tree-based algorithms behave in this respect. I have been looking for the article for almost 2 years now, but I cannot find it. What is your opinion on this topic? Is there a resource you could recommend on the subject?
Yes, Gradient Boosting pushes the predicted probabilities of a classification problem towards zero and one. I will demonstrate and explain the mathematics behind this in parts 3 and 4 of this series of videos on Gradient Boosting. My recommendation: if you haven't already, subscribe to my YouTube channel (or to this blog, but the YouTube channel is preferred) and you'll learn everything you need to know about Gradient Boosting over the next few weeks (the videos on Gradient Boosting will come out once a week for the next three weeks).
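In the meantime, the effect is easy to see empirically. Here is a minimal sketch, assuming scikit-learn is available; the synthetic dataset, model settings, and the 0.05 "extreme" cutoff are illustrative assumptions (not from the videos), comparing how often boosting vs. logistic regression produces near-0 or near-1 probabilities:

```python
# Illustrative sketch: gradient boosting tends to produce more predicted
# probabilities near 0 and 1 than logistic regression on the same data.
# All parameter choices below are arbitrary, for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic binary classification problem (hypothetical data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a boosted model with many trees and a plain logistic regression.
gbm = GradientBoostingClassifier(n_estimators=500, random_state=0)
gbm.fit(X_train, y_train)
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

gbm_probs = gbm.predict_proba(X_test)[:, 1]
lr_probs = logreg.predict_proba(X_test)[:, 1]

def extreme_fraction(p, cutoff=0.05):
    """Fraction of predictions within `cutoff` of 0 or 1."""
    return ((p < cutoff) | (p > 1 - cutoff)).mean()

print(f"gradient boosting, fraction of extreme probabilities: "
      f"{extreme_fraction(gbm_probs):.2f}")
print(f"logistic regression, fraction of extreme probabilities: "
      f"{extreme_fraction(lr_probs):.2f}")
```

A histogram of `gbm_probs` will typically show mass piling up at the two ends of the [0, 1] interval; the math behind why that happens is what parts 3 and 4 work through.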
Hi Josh, keep up the great work! Soon your channel will be the go-to channel for ML learning.