No, it has not. We are nowhere close to even the most powerful supercomputers passing a Turing test. As of this writing, there has never been an AI that experts agree passes the Turing test.
If it exists, please link me to it so I can buy it.
In order to pass a Turing test, the computer must be interrogated by a human. There must be a back-and-forth dialog between the human and the computer, whereby the human challenges the computer by asking it various questions. If a computer spits out information that seems human, but cannot respond to interrogation, it cannot, by definition, pass the Turing test.
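To make the structure concrete, here's a minimal sketch of that interrogation loop. All of the names and the toy judge are illustrative, not from Turing's paper; the point is only that the verdict comes from back-and-forth questioning, not from any single output.

```python
# Sketch of the imitation game's structure: an interrogator poses questions
# to a hidden responder and must judge, from the answers alone, whether the
# responder is human. Everything here is a toy illustration.

def run_interrogation(responder, questions, judge):
    """Challenge the responder with questions, then return the judge's verdict."""
    transcript = []
    for question in questions:
        answer = responder(question)   # the hidden party must answer each challenge
        transcript.append((question, answer))
    return judge(transcript)           # verdict based on the whole dialog

# A canned bot "spits out information" but cannot respond to interrogation.
def canned_bot(question):
    return "That is very interesting!"

def naive_judge(transcript):
    answers = [answer for _, answer in transcript]
    # Identical answers to different questions betray a non-responsive bot.
    return "machine" if len(set(answers)) == 1 else "undecided"

verdict = run_interrogation(canned_bot, ["What is 2+2?", "What's your name?"], naive_judge)
print(verdict)  # -> machine
```

The bot fails not because any one answer looks inhuman, but because it can't hold up under challenge--which is exactly the distinction being made above.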
This is not a matter of opinion; it is how the test's creator, Alan Turing, defined it.
Search the original paper for occurrences of the term "interrogator". I count 25 occurrences.
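If you want to check a count like that yourself, a few lines of Python will do it. The excerpt below is a short stand-in; the file name in the comment is hypothetical--substitute the path to your own copy of the paper.

```python
import re

def count_term(text, term):
    # Case-insensitive count of a whole term's occurrences.
    return len(re.findall(re.escape(term), text, flags=re.IGNORECASE))

# Stand-in excerpt; use the full text of the paper for the real count.
excerpt = (
    "The object of the game for the interrogator is to determine which "
    "of the other two is the man and which is the woman. The interrogator "
    "is allowed to put questions to A and B."
)
print(count_term(excerpt, "interrogator"))  # -> 2

# For a local copy of the paper (file name is hypothetical):
# with open("turing-1950-computing-machinery.txt") as f:
#     print(count_term(f.read(), "interrogator"))
```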
Moore's Law is the observation that the number of transistors we can pack into an integrated circuit doubles roughly every two years. With our current algorithms, that rate is not sufficient to produce an AI capable of passing the Turing test in the near term.
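For a sense of scale, doubling every two years means growth by a factor of 2^(years/2). The starting figure below is illustrative only, order-of-magnitude at best:

```python
# Moore's Law arithmetic: doubling every `doubling_period` years means the
# count grows by a factor of 2 ** (years / doubling_period).
def transistors_after(start, years, doubling_period=2):
    return start * 2 ** (years / doubling_period)

start = 50e9  # illustrative figure for a modern high-end chip
growth = transistors_after(start, 10) / start
print(growth)  # -> 32.0 (a decade of doubling every two years)
```

A 32x increase in transistor density over a decade is enormous, but it says nothing about whether the software running on those transistors gets any closer to surviving an interrogation.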
This is an important and indeed inevitable topic of discussion, but there are shorter-term issues that we need to deal with first. The singularity is a ways off, but AI still has a lot of potential to cause harm long before it reaches that point. Those are the issues worth discussing in this thread--not the potential for AI to blend in with humans on a forum.
For example, there's no disputing that large social media networks have a tendency to be flooded with posts by bots. These aren't intelligent bots, though; they're essentially just screaming into the void. They don't need to be intelligent: they're not being interrogated--or, if they are, those interrogations are not generally being viewed by their target demographic anyway.
Since you're a fan of YouTube videos, here's a video published by Tom Scott, who's a bit more qualified than Joe Scott to be vetting technical content. It involves an ML-generated impersonation of himself created by Jordan Harrod, an AI/ML professional. It could definitely fool you into thinking it's human--but it does not pass the Turing test, because it cannot successfully impersonate a human during an interrogation. Neither Jordan nor Tom makes any attempt to claim it can pass the Turing test: it doesn't need to pass the Turing test to be dangerous.
There are aspects of this that are appropriate for discussion on NamePros.
Bots are an imminent threat on both NamePros and other websites--but they're not bots capable of passing the Turing test. Again, they don't need to be able to pass the Turing test to cause harm.
Prior to discussing it anywhere, you should read the original paper describing the Turing test. It's neither long nor dense, and it provides the industry-standard assessment to determine when we have reached the singularity.
As for where you should further discuss this topic once you've researched it: probably not on the internet! There's an inherent flaw in debating whether the internet is being overrun by bots on the internet itself. If that's an argument you want to make, finding local academic groups and meetups is probably a better way to approach it--although if you believe bots can infiltrate those, too, then you're venturing more into the realm of philosophy.