@mkolos · Jul 12, 2022 · 6 min read
Calculator vs Human
Who is more intelligent: a calculator or a human? It depends on what operations one needs to perform.
So, the advantage of a calculator is that it can perform tons of mechanical (non-intelligent) operations quickly, such as addition, multiplication, subtraction, if-then-else, etc.
ML has much the same strength: it can mechanically learn many "atomic conclusions" (e.g. if X, then Y), combine them into bigger ones (e.g. if Y and Z, then A), and eventually output some "final conclusion", e.g. that the object O belongs to the class C. This is known as learning from training data. A computer running an ML algorithm can test thousands or even millions of hypotheses and ways to combine them in order to find the ones that produce a final conclusion closer to the truth.
Any ML algorithm is just a chain of mechanical calculations
Let's consider a simple example: we need to classify news titles into categories (Sports, Business, Health, Science, etc.). Let's apply a basic ML algorithm that simply finds typical words for each category. To do that, the algorithm counts how often a given word appears in Sports, Business, Health, and the other categories. The typical words are the ones that occur mostly in one category but rarely in the others. Thus, a typical word is a signal to classify a given title with that word's category. A human could do much the same "counting" (a statistics-based approach) on their own, but using a computer for such mechanical work is, of course, far more convenient.
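To make this counting step concrete, here is a minimal Python sketch; the toy titles, the category names, and the `typical_words` helper are illustrative assumptions, not code from the post:

```python
from collections import Counter, defaultdict

# Hypothetical toy training data: (title, category) pairs.
training_data = [
    ("team wins the cup final", "Sports"),
    ("stocks rally as markets open", "Business"),
    ("new vaccine trial shows promise", "Health"),
    ("coach praises team after final", "Sports"),
    ("markets fall on rate fears", "Business"),
]

# Count how often each word appears in each category.
counts = defaultdict(Counter)
for title, category in training_data:
    for word in title.lower().split():
        counts[category][word] += 1

def typical_words(category, counts):
    """Words that occur in this category more often than in all the others combined."""
    others = Counter()
    for cat, c in counts.items():
        if cat != category:
            others.update(c)
    return {w for w, n in counts[category].items() if n > others[w]}

print(typical_words("Sports", counts))
```

On real data one would use a smarter "typicality" score (e.g. relative frequency), but the mechanical nature of the work is the same: split, compare, count.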
Theoretically, this applies to absolutely any ML algorithm: a human could manually calculate even thousands of parameters of a deep neural network for visual object recognition (though that would not play to a human's strengths: a computer does that work incomparably faster and more precisely, whereas a human doesn't need any interim calculations to perceive an image). Any more complex algorithm is a sort of statistical approach too, but a human can no longer interpret what exactly such an algorithm is counting; it is more of a black box that probes tons of hypotheses in order to approximate the truth. So, a more complex algorithm is just a longer chain of calculations.
Don't better ML algorithms matter?
They do, because they suggest a better approximation of the real world's truth. For example, just the set of words in a title might not be enough to classify some text; in particular, word order matters too, and negation inverts the meaning of a text.
With plenty of standard libraries for ML, trying all existing algorithms is basically effortless. Nevertheless, don't brute-force the problem by blindly trying every ML algorithm. Instead, try to understand the "essence" of the problem by looking into the training examples and manually finding some insights first (e.g. noticing that each news category has some typical words). In other words, first imagine what an ML algorithm is supposed to find (e.g. counting the occurrences of each word in each category will reveal the typical words). Then your intelligence in finding relations and causations will be combined with an ML algorithm's "superpower": finding the most prominent relations and the most credible causations among a vast number of hypotheses.
When designing an ML algorithm, solve the problem using only mechanical calculations
An ML algorithm is a human's approach to a problem converted into the form of mechanical calculations. If a human understands the language of the news titles to be classified, classifying a title is trivial. But a machine cannot understand the meaning of a title and a category the way a human does, and cannot group titles by categories. To a machine, the titles are just a series of digits. A machine can perform only mechanical operations, e.g. split a title into words, compare two words, or count the number of occurrences of a word. In other words, a machine operates like a human who doesn't understand the language of the titles. How would such a human solve this problem? Right: the human would try to find some correlation between a title's category and the words in it, i.e. would notice that some words appear in one category more often than in others. As you can see, we have converted a human's approach into mechanical calculations. Solving a problem with the operations a machine can perform is what can be converted into an ML algorithm.
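The classification step itself is equally mechanical: once the typical words are known, a title can be scored by how many of each category's typical words it contains. A minimal sketch, where the typical-word sets are made-up placeholders standing in for the output of the counting step:

```python
# Hypothetical typical-word sets, as the counting step would produce them.
typical = {
    "Sports": {"team", "coach", "cup", "goal"},
    "Business": {"stocks", "markets", "shares"},
    "Health": {"vaccine", "trial", "drug"},
}

def classify(title):
    words = set(title.lower().split())
    # Score each category by how many of its typical words the title contains.
    scores = {cat: len(words & vocab) for cat, vocab in typical.items()}
    return max(scores, key=scores.get)

print(classify("coach rotates team before cup tie"))  # → Sports
```

Every operation here (splitting, set intersection, counting, taking a maximum) is something a human who doesn't know the language could also do by hand.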
The human could get more insights, e.g. that word order matters, or that pairs and longer sequences of words carry meaning that individual words alone do not. This is how more advanced ML algorithms (e.g. an N-gram model or an LSTM), which are closer to how a human understands a text, can be suggested.
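For instance, the simplest step beyond single words is to count pairs of adjacent words (bigrams), which preserves some word-order information; a sketch of such a feature extractor (the example phrase is illustrative):

```python
def bigrams(title):
    """Pairs of adjacent words: keeps some word-order information."""
    words = title.lower().split()
    return list(zip(words, words[1:]))

print(bigrams("not a good result"))
# [('not', 'a'), ('a', 'good'), ('good', 'result')]
```

Counting bigrams per category then proceeds exactly like counting single words, so negations such as "not good" become features of their own.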
Assessing a training dataset's size
Understanding what your ML algorithm is supposed to find would also help you assess how much data you need. In the example above, you may need to ask yourself questions like the following (all numbers mentioned below are approximate): how many typical words do we expect per category, and how many typical words would a single title contain?
So, we expect to find about 10 thousand typical words per category. Say each title contains two or three words that are typical for its category. Then, to see each typical word at least once, we need about 3–5 thousand training examples per category.
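The arithmetic behind this estimate can be checked mechanically; the numbers below are the rough figures from the text, not measured values:

```python
typical_words_per_category = 10_000  # rough estimate from the text
min_typical_words_per_title = 2      # "two or three typical words per title"
max_typical_words_per_title = 3

# To encounter each typical word at least once, we need roughly:
low = typical_words_per_category // max_typical_words_per_title
high = typical_words_per_category // min_typical_words_per_title
print(low, high)  # → 3333 5000
```

which matches the 3–5 thousand examples per category quoted above.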
An ML algorithm doesn't really get rid of noise in training data
An ML algorithm doesn't really get rid of noise in training data; rather, it replicates everything found there in proportion to the number of occurrences of a given mistake. For example, if the word "soccer" (a typical word for the Sports category) occurs in Fashion a number of times, that word may mislead the algorithm into classifying a title containing "soccer" as a fashion title.
We may assume that the training data contains only a small number of mistakes like this, which is normally valid: good examples prevail. Then one can simply ignore counts below some threshold and thereby ignore the mistaken examples. However, note that ignoring small counts may also prune some true but rare occurrences that could help in classification. One cannot cut off only the mistaken examples; some good examples will be cut off too. The more bad examples vanish, the more good examples do as well. Most likely, though, the area being cut off (the area of rare occurrences) contains more bad examples than good ones, which is the desired result.
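A minimal sketch of such a count threshold; the Fashion counts below are invented for illustration, with "soccer" standing for a mislabeling artifact and "couture" for a true but rare word:

```python
from collections import Counter

# Hypothetical word counts for the Fashion category.
fashion_counts = Counter({"dress": 40, "runway": 25, "couture": 2, "soccer": 2})

MIN_COUNT = 3  # threshold: ignore words seen fewer than 3 times

kept = {w: n for w, n in fashion_counts.items() if n >= MIN_COUNT}
print(kept)  # {'dress': 40, 'runway': 25}
```

Note that the threshold prunes the noisy "soccer" and the legitimate "couture" alike: coverage is traded for precision.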
Thus, ignoring rare occurrences sacrifices coverage (some titles containing those true but rare words will not be classified) in favor of precision (the mistaken examples, which are supposed to have small counts and be cut off, will not affect classification). This is also why a larger dataset may not help to increase precision: if the mistaken examples are multiplied too, an ML algorithm will draw conclusions from them as well.
An ML algorithm, like a calculator, doesn't make mistakes in technical calculations, but such an algorithm cannot fix mistakes in the training data, just as a calculator cannot fix wrong numbers entered by a human.
Summary
An ML algorithm, like a calculator, is a chain of mechanical calculations: a human's approach to a problem converted into operations a machine can perform. Understanding what the algorithm is supposed to find helps you choose a better algorithm, assess how much training data you need, and anticipate how noise in that data will affect the result.