At the heart of many approaches to machine learning lies the creation of a model from a set of training data. Once a model is created, it can be used over and over again. However, the world isn't stationary, and often the model needs to change as new data becomes available. Simply re-running the model creation step can be slow and costly. Incremental learning addresses this issue, making it possible to learn incrementally from streams of data and so react to change faster. As a bonus, the compute and memory requirements are lower and predictable. Our practical experience with River continues to be positive. Vowpal Wabbit, which can be an alternative, has a much steeper learning curve, and the scikit-like API offered by River makes it more accessible to data scientists.
In our own implementations, however, we've so far added checks, sometimes manual, after updates to the model.
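To make the idea concrete, here is a minimal sketch of the test-then-train loop typical of incremental learning, written against River's scikit-like `predict_one`/`learn_one` API. The pipeline, dataset, and metric choices are illustrative assumptions, not something prescribed by the text above:

```python
from river import compose, datasets, linear_model, metrics, preprocessing

# An illustrative pipeline: scale each incoming sample, then classify it.
model = compose.Pipeline(
    preprocessing.StandardScaler(),
    linear_model.LogisticRegression(),
)
metric = metrics.Accuracy()

# Test-then-train: score each sample *before* learning from it, so the
# running metric reflects performance on data the model hasn't seen yet.
# datasets.Phishing is a small dataset bundled with River, used here only
# as a stand-in for a real data stream.
for x, y in datasets.Phishing():
    y_pred = model.predict_one(x)  # predict a single sample
    metric.update(y, y_pred)       # update the running accuracy
    model.learn_one(x, y)          # update the model in place, one sample at a time

print(metric)
```

Because each step touches only one sample, memory use stays flat and per-update cost is predictable, which is exactly the property that makes this approach attractive for streams.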