As the discipline of machine learning has grown, scholars have been flooded with articles from the domain. Communicating ideas from a field as dynamic as AI/ML is an art, and like any art, writing ML research papers comes with a set of rules. Zachary Lipton, a researcher at Carnegie Mellon, has spoken about the serious repercussions that poorly written research papers have on ML and AI. “Sloppy writing poses an existential threat to AI ethics research. There is no clear thinking without clear writing. Core ML papers can survive sloppy prose by expressing ideas clearly in math. When important ideas are qualitative, bad writing is a death sentence,” says Lipton.
Excessive use of technical jargon
Abhi Dubey, a researcher at FAIR, Meta AI’s (formerly Facebook’s) AI research group, discussed some of the issues he encountered as a peer reviewer. A common tendency when writing introductions to research papers is to fill them with jargon that is mostly superfluous. Sentences full of empty verbiage, like “Deep learning and CNNs have revolutionized computer vision over the last decade,” are entirely unnecessary.
Dubey’s advice to researchers is to get straight to their point.
Economist Paul Romer has also addressed the notion that adding more mathematical equations to a paper is proof of technical soundness; in practice, they often confuse readers and widen the gap instead of closing it. “Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language,” Romer said.
Clear analysis of the literature
Dubey also noted that researchers sometimes treat the “Related Works” section of a paper as a mere checklist. Ideally, the literature review should cover only the work most relevant to the paper’s topic and then demonstrate how the current work improves on it. Dubey has noticed that researchers want to cite as many related works as possible, even those that are only tangentially related. Related work is an important section: it situates the paper among the significant work done in the area, and it should therefore be treated selectively and with respect.
Excessive notation and citations
Researchers have also come to believe that the more jargon a paper introduces, the more technical prowess it signals. This, says Dubey, is not true; it only makes the paper harder to read. He observed submissions introducing, on average, five new acronyms each, with a new acronym for every module. He suggests that researchers limit themselves to one new acronym per paper.
Similarly, he notes that most of the submissions he saw used excessive notation, with complex symbols that were not necessary. Here too, Dubey says that notation should be used consistently throughout the paper and with some consideration; simplicity is best. Following convention, matrices can be written in bold and sets in calligraphic script. He also advises against using too many footnotes.
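As an illustration of these conventions, a minimal LaTeX sketch (the macro names and symbols here are illustrative, not from Dubey’s review notes):

```latex
% Consistent, minimal notation: define macros once and reuse them everywhere.
\newcommand{\mat}[1]{\mathbf{#1}}   % matrices in bold
\newcommand{\set}[1]{\mathcal{#1}}  % sets in calligraphic script

% A weight matrix $\mat{W}$ applied to inputs from a dataset
% $\set{D} = \{(x_i, y_i)\}_{i=1}^{N}$ yields predictions $\hat{y} = \mat{W} x$.
```

Defining macros once means a later notation change is a one-line edit rather than a hunt through the whole manuscript.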
Most ML papers are also littered with unintelligible citations. The numerical citation style accepted at CVPR and ICCV is difficult for readers to follow. Citing papers by the author’s name or by the name of the algorithm is much more readable.
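In LaTeX, author–year citations of this kind are supported by the standard natbib package; a small sketch (the bibliography keys are hypothetical):

```latex
\usepackage{natbib}  % standard package for author-year citation commands

% Numeric style reads as "... as argued in [17]", which forces a lookup.
% Author-year style keeps the attribution in the running text:
As \citet{lipton2018troubling} argue, speculation should be separated
from explanation. This concern has been raised before
\citep{romer2015mathiness}.
% Renders roughly as: "As Lipton and Steinhardt (2018) argue, ...
% This concern has been raised before (Romer, 2015)."
```

`\citet` produces a textual citation (author name in the sentence), while `\citep` produces a parenthetical one.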
Zachary Lipton and Jacob Steinhardt presented a paper at the ICML 2018 Machine Learning Debates workshop, entitled “Troubling Trends in Machine Learning Scholarship,” on some harmful patterns that have recently emerged in research. The paper notes that some studies mix speculation with explanation. Conversely, it encourages researchers to be explicit about uncertainty when the evidence is uncertain.
In Yoshua Bengio’s paper, “Practical Recommendations for Gradient-Based Training of Deep Architectures” (in Neural Networks: Tricks of the Trade), he mentions: “Although such recommendations come… from years of experimentation and, to some extent, from mathematical justifications, they must be questioned. They are a good starting point… but very often have not been formally validated, leaving open many questions that can be answered either by theoretical analysis or by solid comparative experimental work.” This explicitly conveys the author’s doubts about the methods and their possible limitations.
Need for reproducibility
In a study titled “Ten Ways to Fool the Masses with Machine Learning,” researchers Fayyaz Minhas, Amina Asif, and Asa Ben-Hur discuss the pitfalls of conducting and reporting ML experiments. A recurring problem with scientific papers is the reproducibility crisis: in a survey conducted by Joelle Pineau at ICML, most researchers stated that there is a marked need for reproducibility in ML research papers.
Although the community understands that ML software needs to be more freely available, many published studies release neither code nor software. If a study mentions all the details of its execution, including hyperparameter settings and preprocessing, other researchers can reproduce it without starting from scratch. And even if a method works well on one dataset, that does not necessarily mean it will perform equally well on others.
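One low-effort way to make a run reproducible is to fix the random seed and save the full configuration alongside the results; a minimal sketch in Python (the file name and hyperparameter values are illustrative, not from any of the studies above):

```python
import json
import random

# Keep every hyperparameter and preprocessing choice in one serializable
# dict, so another researcher can rerun the experiment without guessing.
config = {
    "seed": 42,
    "learning_rate": 3e-4,
    "batch_size": 64,
    "preprocessing": "standard-scaling",
}

# Fix the seed before any stochastic step (also seed numpy/torch if used).
random.seed(config["seed"])

# Save the exact configuration next to the experiment's results.
with open("run_config.json", "w") as f:
    json.dump(config, f, indent=2)

# Reloading the file recovers the same settings for a rerun.
with open("run_config.json") as f:
    reloaded = json.load(f)
assert reloaded == config
```

Dumping the configuration as a plain JSON file keeps it human-readable and diffable across runs.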
Dataset and hyperparameters
Moreover, the choice of dataset depends chiefly on the phenomenon the experiment aims to study. Some studies use proxy datasets, which is not advised; if the existing datasets are not of good quality, it is better for researchers to create the datasets themselves.
Dubey makes the same point for a different reason. When a study presents a new result, the paper should highlight how it advances previous work in the field and how the algorithm was tuned, i.e., how the hyperparameters were selected. The study should also clarify where, and whether, the algorithm fails. A study’s shortcomings are a normal part of any research paper and cannot be omitted, all the more so in scientific papers. Researchers should also be honest enough to mention the inspirations for their work.
As one of the most important parts of the paper, the conclusion should be insightful in its own right and not merely repeat the abstract. It should state what has been done in the specific area, how studies in the area can progress, and what the obstacles may be.