I've been dealing with a variety of 'beta versions' of some predictable downsides of "collective intelligence" for years; keep reading. What I currently consider the greatest danger is the human tendency to accept 'appeals to authority'... that is, after a quick consultation: "how wrong could ChatGPT really be?", especially if it confirms a pre-existing belief.
I've had plenty of situations in my more recent professional career that fall toward the 'cut-and-paste from an internet source' end of the information-quality spectrum. For example:
- I had a colleague who didn't like a well-established, particular, and specific use of a technical term and dropped 10 pages of Google results on my desk to explain why the term was being misused. My (literal, hah!) response: "take it to the Microsoft forums, it is their term".
- Similar to the above, I had a colleague (in medical device manufacturing) who came from pharmaceutical manufacturing and would not use the medical device industry's definitions of IQ/OQ/PQ. This colleague kept pointing to the wrong industry's references while ignoring the correct ones.
- Pre-internet, I had to deal with a number of electrical designs that were cobbled together from the sample circuits in the backs of datasheets and that failed to implement basic (and necessary) circuit design elements. "But that isn't on the datasheet?!". See also: picking the wrong Transzorbs (TVS diodes) for not knowing the difference between an AC and a DC rating.
- Similar to the previous point, more recently (in the internet era) I had to deal with mechanical engineers who would constantly specify materials based only on what appeared on the drawing they were copying elements from, and then use Google searches to rationalize their choices.