Abstract
Classical natural language processing endeavored to understand the language of native speakers. When this goal proved to lie beyond the horizon, a scaled-down version settled for text analysis and processing but retained the old name and acronym. Yet text ≠ language. Any combination of signs and symbols qualifies as text; language presupposes meaning, which is what connects it to real life. Failing to distinguish between the two results in confusing humanoids (machines thinking like humans) with machinoids (humans thinking like machines). As scientific English (SciEng) became the lingua franca of science, it acquired all the traits of a machine language: a reduced vocabulary, in which fewer and fewer words carry more and more meanings; prescribed use of pronouns; and depersonalized, rigid syntactic forms and rules of composition. Compliance with SciEng standards can be verified automatically, which means that SciEng can also be imitated automatically, a practice now referred to as AI writing (ChatGPT). The article discusses an attempt to automatically correct deviations from these rules by what is touted as AI editing.