My son gets up from the audience and climbs onto the stage, half crawling, half scaling the stairs. The unscripted, unrehearsed entry surprises the performers as much as the audience. Before anyone realises what is happening, he runs towards the watermelon, pieces of which have been placed as props on the stage to signify the onset of summer. This is not to say that he does not miss his parents. He misses us as much as he misses watermelon, if not more. How do I know? Just thirty minutes before sabotaging the stage, when he was supposed to perform his steps in the group dance, he started sobbing “Amma” on stage while the others repeated the diligently rehearsed routine.
These are lovely imperfections, beautiful and desirable: a feature, not a bug. You cannot drill the kids so hard for that single dance programme that it becomes difficult for them to adapt to unseen situations during the performance. For example, if the power supply to the stage is disrupted for ten seconds, we would like the performers to resume their steps once the power is back. In machine learning, learning too much, memorising precisely what MUST be done, is called overfitting the model. An overfitted model does not adapt well to unseen situations. Because the teaching must apply to many unseen scenarios, the learning must be approximate.
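A minimal sketch of overfitting, using synthetic data made up for illustration: a high-degree polynomial memorises ten noisy points perfectly, while a simple straight-line fit learns only the approximate trend, which is what we actually want for unseen inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training examples of a simple underlying trend y = 2x + 1.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 1 + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial has enough capacity to memorise all ten points:
# it learns "precisely what MUST be done", noise included.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# A degree-1 fit learns only the approximate relationship.
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

# On the training points the overfitted model looks perfect...
train_err_over = np.abs(overfit(x_train) - y_train).max()
print(train_err_over)  # essentially zero: it memorised the examples

# ...but between and beyond the training points its wiggles can take it
# far from the true trend, while the simple model stays close to it.
x_new = 0.55
print(abs(overfit(x_new) - (2 * x_new + 1)))
print(abs(simple(x_new) - (2 * x_new + 1)))
```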
This is precisely the difference between rule-based programming (traditional programming, if I may call it that) and ML programming. In rule-based programming, the inputs are given to the program to produce an output. In ML programming, inputs and outputs are given to the program (the ML algorithm) to create a model. In very simplistic terms, an ML model is the “approximate relationship” between the inputs (age, educational qualification, designation, city) and the output (annual salary). The ML algorithm can arrive at this approximate relationship because I provided thousands of examples of input/output combinations from history. Unsurprisingly, building an ML model is called “learning by examples”. Once this approximate relationship is learnt, don’t you think I can give you a prediction for any combination of inputs you give me? Of course: as long as I hold this relationship definition in my right hand and your inputs in my left hand, I can provide the prediction.
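The idea above can be sketched in a few lines. The salary figures, coefficients, and features here are entirely made up for illustration: we generate thousands of synthetic input/output examples from an assumed relationship, let least squares rediscover that approximate relationship (the “model”), and then predict for an input combination never seen in the history.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical historical examples: age, years of education, city tier.
age = rng.integers(22, 60, n)
education = rng.integers(12, 22, n)
city_tier = rng.integers(1, 4, n)

# An assumed "true" relationship the algorithm has to rediscover (noise
# makes the examples imperfect, as real history always is):
# salary = 10_000 + 800*age + 2_500*education - 3_000*city_tier + noise
salary = (10_000 + 800 * age + 2_500 * education - 3_000 * city_tier
          + rng.normal(0, 2_000, n))

# "Learning by examples": fit the approximate relationship from
# thousands of input/output pairs via least squares.
X = np.column_stack([np.ones(n), age, education, city_tier])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

# Relationship in the right hand, your inputs in the left:
new_input = np.array([1.0, 35, 18, 2])  # bias, age, education, city tier
prediction = new_input @ coef
print(round(prediction))
```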
A rule-based system is always a hundred per cent accurate. There is no learning involved: given a set of input values, we know the output. There is nothing approximate or probabilistic about it. However, such rule-based systems can’t scale. They cannot produce an output for a combination of input values they have not seen. Hence, the system either performs perfectly or does not perform at all. That’s the trade-off between a machine-learning solution and a rule-based solution. A rule-based solution is the way out in problems where probability is not invited.
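A toy illustration of that trade-off, with a made-up rule table: for every combination the rules cover, the answer is exact and always the same; for a combination the rules have never seen, the system has no answer at all.

```python
# A hypothetical rule table: exact outputs, no learning, no approximation.
RULES = {
    ("manager", "mumbai"): 90_000,
    ("engineer", "mumbai"): 70_000,
    ("engineer", "pune"): 60_000,
}

def rule_based_salary(designation: str, city: str) -> int:
    key = (designation, city)
    if key not in RULES:
        # The system either performs perfectly or not at all:
        # an unseen combination simply has no output.
        raise KeyError(f"no rule for {key}")
    return RULES[key]

print(rule_based_salary("engineer", "pune"))  # always exactly 60000

try:
    rule_based_salary("manager", "pune")      # unseen combination
except KeyError as e:
    print("cannot answer:", e)
```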
Can we have the best of both worlds? You guessed it right: we can build a solution with a combination of rule-based and machine-learning components. We can place the rules before or after the machine-learning component. In the former scenario, the conditions decide which model is triggered to make a prediction. In the latter scenario, the ML predictions are passed through the conditions before being displayed to the user.
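The two hybrid layouts can be sketched as follows. The model here is a stand-in function, and the thresholds and business rules are invented purely for illustration: rules before the model decide whether the model runs at all, and rules after the model post-process its prediction before display.

```python
def ml_model(age: int, education: int) -> float:
    """Stand-in for a trained model's prediction (coefficients made up)."""
    return 10_000 + 800 * age + 2_500 * education

# Rules BEFORE the ML component: conditions gate which prediction runs.
def predict_with_pre_rules(age: int, education: int) -> float:
    if age < 18:
        return 0.0  # hypothetical business rule: no prediction for minors
    return ml_model(age, education)

# Rules AFTER the ML component: the raw prediction is processed through
# conditions before being shown to the user.
def predict_with_post_rules(age: int, education: int) -> float:
    raw = ml_model(age, education)
    # Hypothetical rule: clamp the displayed value to a plausible range.
    return min(max(raw, 15_000), 200_000)

print(predict_with_pre_rules(16, 12))    # rule fires, model never runs
print(predict_with_post_rules(30, 16))   # model runs, rule passes it through
```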
Children’s spontaneous actions, such as the watermelon stage invasion and the tearful “Amma” moment, embody the beauty of imperfection. Like well-designed machine learning models, children shouldn’t be overfitted to perform perfectly but trained to adapt to unexpected situations. Their unscripted moments remind us that the most authentic performances balance structure with flexibility and rules with spontaneity. In parenting and performance, embracing these imperfections creates resilience and genuine expression that no perfectly rehearsed routine could match.
Disclaimer
Views expressed above are the author's own.