There have been many articles claiming that AI and machine learning will replace the entire human race. Whether that will happen, I don’t know. However, machine learning in many ways simply mimics human learning, and I believe we can apply effective machine learning techniques to improve our own learning and education.
- Be open to updating your decision process to face new challenges based on new data rather than outdated data: for an AI model to learn new knowledge or adapt to new problems, it needs new data. Opportunities are missed when we don’t keep our knowledge current. For example, many people in Hong Kong doubted that China’s economy would one day catch up, even though they lived right next to China and were emigrating to foreign countries. When I was born in Hong Kong in the 1990s, China’s economy was significantly worse than that of most of the world. Nonetheless, I have watched many Chinese cities go from Detroit-like conditions to world-class status in less than twenty years. Now, if I say I want to work in China, people will think of it as a good choice, but if I say Ethiopia, people will probably think I am crazy. But hey, let’s look at a simple plot of GDP growth rates pulled from the World Bank: Ethiopia has sustained a growth rate of around 10% for roughly ten years. If China could go from dirt poor to having a vibrant economy, why couldn’t Ethiopia do the same?
- In problem-solving, try different solutions before focusing on one. When there is no certain (closed-form) solution, a common ML algorithm for finding a near-optimal one is simulated annealing, inspired by the annealing process used to mold glass. Glass annealing starts with molten glass that is slowly cooled and shaped into a hard, specific form. At each step, the possible shapes become more and more constrained by the shape of the previous glass. We likewise get stuck in old ways of thinking: “If all you have is a hammer, everything looks like a nail.” To get to the masterpiece in your mind, you need to be open to starting over (a random restart). Of course, how often you need to restart is probably worth another post.
- Shrink the problem by finding recurring patterns: convolutional layers and distributed representations in deep learning try to capture the recurring patterns in the data that matter for the problem. For example, the most effective way to learn about a company is through a concise executive summary. While it is good to generalize, also take care to avoid inaccurate overgeneralization.
- Remember what is important, but don’t get stuck in the past: Long Short-Term Memory (LSTM) made recurrent neural networks more effective by learning to remember the important parts of the data. Life is a journey. We cannot fully predict tomorrow, but we can learn from the past. The key is to focus on the parts that will shape the future, instead of dwelling on trivial things. “Always and forever” is a beautiful idea, but our world keeps changing in profound ways.
- Don’t get discouraged by making mistakes: deep learning may be state-of-the-art, but did you know that it fails a lot when it first starts training? Following a fixed recipe, like a rule-based algorithm, may yield the simplest solution in the short term, but it fails to adapt and gets phased out over time. So which approach are you going to use?
- Try starting with the right information in problem-solving: pre-training and transfer learning in ML can save a lot of training time and reduce data requirements. Trying to figure out everything by yourself takes a lot of time. It is much better to find a mentor or start with the right knowledge, so that you don’t have to learn everything from scratch every time.
- Focus on the important variables: regularization is commonly adopted in model fitting to avoid overfitting. Don’t let the trivial negative things that happen in life ruin you. Base your decisions on the experiences that matter most for maximizing your future outcomes.
- Incomplete or biased data leads to biased decisions and poor results: if the data used to train an ML/AI model is incomplete or biased, so will be the decisions the model makes. Try to get multiple perspectives before jumping to a conclusion! You might also want to read about the MECE (mutually exclusive and collectively exhaustive) principle from “The McKinsey Way” by Ethan M. Rasiel. “Factfulness” by Hans Rosling is also a great book on refining our knowledge of the world.
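For readers curious what the simulated annealing mentioned above looks like in code, here is a minimal sketch (not from the original post); the bumpy test function, schedule, and all parameters are invented for illustration:

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.95, steps=200, rng=None):
    """Minimize f starting from x0 with a simple geometric cooling schedule."""
    rng = rng or random.Random(0)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        candidate = x + rng.gauss(0, 1) * temp  # step size shrinks as we cool
        fc = f(candidate)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / temp), so early exploration is easy
        # and late moves become conservative -- like hardening glass.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest

def anneal_with_restarts(f, restarts=5, rng=None):
    """Random restarts: start over from fresh points to escape local minima."""
    rng = rng or random.Random(0)
    results = [simulated_annealing(f, rng.uniform(-10, 10), rng=rng)
               for _ in range(restarts)]
    return min(results, key=lambda r: r[1])

# A bumpy function with many local minima; the global minimum is at x = 0.
bumpy = lambda x: x * x + 10 * math.sin(x) ** 2
x, fx = anneal_with_restarts(bumpy)
```

A single run can settle into a nearby local minimum; the restarts are what let it “start over” and explore shapes the previous run had already ruled out.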
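To make the pattern-capturing idea from the convolution bullet concrete, here is a toy sketch (not from the original post) using 1-D cross-correlation, the core operation inside a convolutional layer; the motif and signal are made up for demonstration:

```python
import numpy as np

# A signal containing a short motif repeated at two positions.
motif = np.array([1.0, 2.0, 1.0])
signal = np.zeros(20)
signal[3:6] += motif
signal[12:15] += motif

# Sliding the motif across the signal (cross-correlation) produces a
# strong response wherever the recurring pattern appears -- the same
# idea a learned convolutional filter exploits.
response = np.correlate(signal, motif, mode="valid")
peaks = np.argsort(response)[-2:]  # offsets of the two strongest matches
```

One small filter finds every occurrence of the pattern, which is exactly how convolution “shrinks” a large input down to the parts that matter.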
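The transfer-learning bullet can also be sketched numerically (my own toy example, not from the original post): warm-starting gradient descent from weights fit on a related task should converge in fewer steps than starting from scratch. All tasks and parameters here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def gd_steps_to_fit(X, y, w0, lr=0.1, tol=1e-3, max_steps=10_000):
    """Count gradient-descent steps until mean squared error falls below tol."""
    w = w0.astype(float).copy()
    for step in range(max_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        if np.mean((X @ w - y) ** 2) < tol:
            return step
    return max_steps

X = rng.normal(size=(50, 3))
w_source = np.array([1.0, -2.0, 0.5])      # weights learned on "task A"
y_target = X @ np.array([1.1, -1.9, 0.6])  # a closely related "task B"

cold = gd_steps_to_fit(X, y_target, np.zeros(3))  # learn from scratch
warm = gd_steps_to_fit(X, y_target, w_source)     # transfer task-A weights
```

Because the warm start begins much closer to the task-B solution, it reaches the tolerance in fewer steps, which is the “don’t learn everything from scratch” point in miniature.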
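Finally, the regularization bullet can be illustrated with ridge regression in its closed form (a hedged sketch of mine, not from the original post); the data and penalty strength are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a simple linear trend: y = 2 * x_0 + noise.
X = rng.normal(size=(20, 5))  # 5 features, but only the first one matters
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=20)

def fit(X, y, alpha=0.0):
    """Least squares with an L2 (ridge) penalty of strength alpha."""
    n_features = X.shape[1]
    # Closed form: w = (X^T X + alpha * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

w_plain = fit(X, y)             # also fits noise in the irrelevant features
w_ridge = fit(X, y, alpha=5.0)  # shrinks trivial coefficients toward zero
```

The penalty pulls the coefficients of the irrelevant features toward zero while keeping the one that actually drives the outcome, mirroring the advice to ignore trivial noise and focus on what matters.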
Data is part of every useful machine learning equation, and data is simply observed truth. So, in a way, what machine learning teaches us most is to always respect the truth in our decision making.
Please leave comments and thoughts!
Author: Brian Tsui, Bioinformatics, Ph.D. Candidate, UC San Diego.
Edited by: Tiffany Hwu, Cognitive Science, Ph.D. Candidate, UC Irvine.
P.S.: If you are interested in how the simple plot was generated, click here.