Discussion:
Disguised AI bots in social platforms
Anton Shepelev
2024-03-27 09:57:36 UTC
Hello, all.

No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?

I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
--
() ascii ribbon campaign -- against html e-mail
/\ www.asciiribbon.org -- against proprietary attachments
D
2024-03-27 11:31:04 UTC
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
"They're here already! You're next! You're next!"
--Dr. Miles Bennell, Invasion of the Body Snatchers

ironically, Kevin McCarthy's parents both died of actual influenza
during the "Spanish Flu" pandemic of 1918 (Roy, aged 38; Tess, 29)
Rich
2024-03-27 14:11:33 UTC
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups SPAM, than
another has appeared on the horizon. Since AI in general and LLMs in
particular are developing at break-neck speed, social platforms may
soon be infested by intelligent bots that will be rather hard to
distinguish from humans (e.g. when the LLM is uncensored). Will it
be the end of online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant mutual cross
verification of users by each other via off-line meetings.
I.e., the PGP web-of-trust. It worked well technically. In reality it
did not live up to its potential, due to the need for those "off-line"
meetings to truly make it workable.

So I see no reason to expect a new variant will fare any better.
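As an illustration of the web-of-trust idea (a sketch only, not the
actual PGP/GnuPG implementation; all key names here are hypothetical):
a key is trusted when a short chain of signatures, each made after an
in-person identity check, connects your own key to it.

```python
from collections import deque

def is_trusted(my_key, target_key, signatures, max_depth=3):
    """Toy web-of-trust check: True if target_key is reachable from
    my_key through at most max_depth signature edges.

    signatures maps a key id to the set of key ids its owner has
    signed, ideally after verifying identity at an off-line meeting."""
    frontier = deque([(my_key, 0)])
    seen = {my_key}
    while frontier:
        key, depth = frontier.popleft()
        if key == target_key:
            return True
        if depth == max_depth:
            continue  # do not extend chains beyond the trust horizon
        for signed in signatures.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                frontier.append((signed, depth + 1))
    return False

# alice signed bob's key at a keysigning party; bob later signed carol's.
sigs = {"alice": {"bob"}, "bob": {"carol"}}
```

Real implementations (e.g. GnuPG's trust model) additionally
distinguish marginal from full trust and check signature validity;
this sketch only captures the chain-of-introductions structure that
the off-line meetings are meant to anchor.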
Adrian
2024-03-27 14:46:07 UTC
In message <uu19el$2sn32$***@dont-email.me>, Rich <***@example.invalid>
writes
Post by Rich
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups SPAM, than
another has appeared on the horizon. Since AI in general and LLMs in
particular are developing at break-neck speed, social platforms may
soon be infested by intelligent bots that will be rather hard to
distinguish from humans (e.g. when the LLM is uncensored). Will it
be the end of online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
And what about cases where the motive isn't directly financial, e.g.
disinformation?

Adrian
--
To Reply :
replace "bulleid" with "adrian" - all mail to bulleid is rejected
Sorry for the rigmarole, If I want spam, I'll go to the shops
Every time someone says "I don't believe in trolls", another one dies.
Rich
2024-03-27 17:14:58 UTC
Post by Adrian
writes
Post by Rich
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups SPAM,
than another has appeared on the horizon. Since AI in general and
LLMs in particular are developing at break-neck speed, social
platforms may soon be infested by intelligent bots that will be
rather hard to distinguish from humans (e.g. when the LLM is
uncensored). Will it be the end of online group-based
communication? Is there any hope of preventing or at least staving
off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
And what about cases where the motive isn't directly financial, e.g.
disinformation?
There's almost always some ultimate financial motive behind even those
things that are "disinformation". Find that underlying motive and snip
it off and the incentives go away. The underlying financial motive can
be difficult to discern in some cases.

But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation' is
relatively small vs. the huge pile of clearly sales/scam spamming
occurring. So it would be helpful overall if those had their oxygen
cut off, because that leaves only the smaller set of kooks with their
disinformation to actively ignore.
Kerr-Mudd, John
2024-03-27 19:21:29 UTC
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
Rich <***@example.invalid> wrote:

[]
Post by Rich
There's almost always some ultimate financial motive behind even those
things that are "disinformation". Find that underlying motive and snip
it off and the incentives go away. The underlying financial motive can
be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation' is
relatively small vs. the huge pile of clearly sales/scam spamming
occurring. So it would be helpful overall if those had their oxygen
cut off, because that leaves only the smaller set of kooks with their
disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
--
Bah, and indeed Humbug.
Rich
2024-03-27 19:42:32 UTC
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.

Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
Kerr-Mudd, John
2024-03-28 09:24:50 UTC
On Wed, 27 Mar 2024 19:42:32 -0000 (UTC)
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
I was thinking specifically of the Russian attempts at misinformation
about Ukraine. This, ISTM, is more about some "Greater Russia" plan than
pure economics.
--
Bah, and indeed Humbug.
Rich
2024-03-28 14:31:48 UTC
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 19:42:32 -0000 (UTC)
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
I was thinking specifically of the Russian attempts at misinformation
about Ukraine. This, ISTM, is more about some "Greater Russia" plan than
pure economics.
However, a "Greater Russia" plan does bring more money to both the
Russian leaders (i.e. Putin and others) and the Russian Oligarchs that
support them. If "Russia" is "greater" then more money will flow into
the pockets of Putin and his allies, so there is still a financial
incentive at play.

This, however, is one of those financial incentives that is harder to
"cut off" without a lot of violence.
grinch
2024-03-30 06:19:10 UTC
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 19:42:32 -0000 (UTC)
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying
motive and snip it off and the incentives go away. The
underlying financial motive can be difficult to discern in some
cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are
'disinformation' is relatively small vs. the huge pile of
clearly sales/scam spamming occurring. So it would be helpful
overall if those had their oxygen cut off, because that leaves
only the smaller set of kooks with their disinformation to
actively ignore.
But there are also political types and governments pushing their
own agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing
in, or keep their nice cushy job prospects open when they leave
their political seat.
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to
stay entrenched far longer than one would like.
I was thinking specifically of the Russian attempts at misinformation
about Ukraine. This, ISTM, is more about some "Greater Russia" plan
than pure economics.
However, a "Greater Russia" plan does bring more money to both the
Russian leaders (i.e. Putin and others) and the Russian Oligarchs
that support them. If "Russia" is "greater" then more money will flow
into the pockets of Putin and his allies, so there is still a
financial incentive at play.
This, however, is one of those financial incentives that is harder to
"cut off" without a lot of violence.
Oil and gas. Russia owns Europe when it comes to that.
grinch
2024-03-30 06:19:05 UTC
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Term limits and a two year hard ban from lobbying once exiting office.
Pass the insider trading ban for everyone in government service, no
exceptions.
Post by Rich
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
Cap government jobs at 20 years. Eliminate government hog trough pensions
where they get paid 130% of what they were making before retirement. Cap
government pensions at 80% max.
Rich
2024-03-30 16:41:03 UTC
Post by grinch
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Term limits and a two year hard ban from lobbying once exiting office.
Which presents you now with a "fox guarding the henhouse" situation. In
most instances, those who benefit most from not having these rules in
place would also have to be the ones to implement them. And where a
politician's benefits are at risk, one can be sure he/she makes sure
he/she has a way to keep those benefits (for example, opting themselves
out of the do-not-call list so many years ago).
Carlos E.R.
2024-04-08 13:19:21 UTC
Post by grinch
Post by Rich
Post by Kerr-Mudd, John
On Wed, 27 Mar 2024 17:14:58 -0000 (UTC)
[]
Post by Rich
There's almost always some ultimate financial motive behind even
those things that are "disinformation". Find that underlying motive
and snip it off and the incentives go away. The underlying
financial motive can be difficult to discern in some cases.
But compared to the spammers with clear financial motives (either
direct sales or by scams) the percentage that are 'disinformation'
is relatively small vs. the huge pile of clearly sales/scam
spamming occurring. So it would be helpful overall if those had
their oxygen cut off, because that leaves only the smaller set of
kooks with their disinformation to actively ignore.
But there are also political types and governments pushing their own
agendas. (Propaganda).
There's also a 'financial' incentive there, in that said
government/political types want to keep either tax revenue flowing in,
or keep their nice cushy job prospects open when they leave their
political seat.
Term limits and a two year hard ban from lobbying once exiting office.
Pass the insider trading ban for everyone in government service, no
exceptions.
Post by Rich
Granted, it is by far much harder to snip off the finances there
(usually involves a revolt and regime change) so those do tend to stay
entrenched far longer than one would like.
Cap government jobs at 20 years. Eliminate government hog trough pensions
where they get paid 130% of what they were making before retirement. Cap
government pensions at 80% max.
Teachers and doctors are government employees here. Will you actually
harm them that way?
--
Cheers, Carlos.
Richard Kettlewell
2024-03-28 14:54:18 UTC
Post by Rich
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups SPAM, than
another has appeared on the horizon. Since AI in general and LLMs in
particular are developing at break-neck speed, social platforms may
soon be infested by intelligent bots that will be rather hard to
distinguish from humans (e.g. when the LLM is uncensored). Will it
be the end of online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Removing the profit motive from the spammers. So long as gullible
users buy the wares offered, or hand money over to the scams, the
spammers have a profit motive to continue to work around all attempts
to thwart them.
Easy to say, very hard to do...
Post by Rich
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant mutual cross
verification of users by each other via off-line meetings.
I.e., the PGP web-of-trust. It worked well technically. In reality
it did not live up to its potential, due to the need for those
"off-line" meetings to truly make it workable.
So I see no reason to expect a new variant will fare any better.
The PGP implementation is pretty bad. Actually the in-person key
confirmation is one of the few features to have survived (generally in
more user-friendly form) into other designs.
--
https://www.greenend.org.uk/rjk/
Retro Guy
2024-03-27 14:31:25 UTC
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Nearly 100% of NoCeM listings since Google left are of computer-generated
posts, but these posts started before 22 Feb.
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
That's a great way to meet a lot of FBI agents.
Mr Ön!on
2024-03-27 14:45:02 UTC
Retro Guy <***@novabbs.org> wrote:
[...]
Post by Retro Guy
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
That's a great way to meet a lot of FBI agents.
That's OK if some of them are pretty or handsome
(according to one's taste).
--
\|/
(((Ï))) - Mr Ön!on

When we shake the ketchup bottle
At first none comes and then a lot'll.
D
2024-03-27 16:09:43 UTC
Post by Retro Guy
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Nearly 100% of NoCeM listings since Google left are of computer-generated
posts, but these posts started before 22 Feb.
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
That's a great way to meet a lot of FBI agents.
it's a common tactic for psyops/troll farm agents to lure the unwary
into separation from whatever group they have infiltrated, to divide
and conquer, essentially to control the narrative ... cointelpro 101;
every active unmoderated usenet newsgroup is constantly under attack
by these mercenary fear merchants . . . those that live by the sword
http://duckduckgo.com/?q=stand+for+the+flag+kneel+for+the+cross+meme
grinch
2024-03-30 06:31:58 UTC
Post by D
Post by Retro Guy
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
Nearly 100% of NoCeM listings since Google left are of
computer-generated posts, but these posts started before 22 Feb.
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
That's a great way to meet a lot of FBI agents.
it's a common tactic for psyops/troll farm agents to lure the unwary
into separation from whatever group they have infiltrated, to divide
and conquer, essentially to control the narrative ... cointelpro 101;
every active unmoderated usenet newsgroup is constantly under attack
by these mercenary fear merchants . . . those that live by the sword
http://duckduckgo.com/?q=stand+for+the+flag+kneel+for+the+cross+meme
Separation only works on wusses. Give what you get and make it hurt.
Moderation sucks and kills everything after a while.
Anton Shepelev
2024-04-07 00:39:31 UTC
Post by Retro Guy
Nearly 100% of NoCeM listings since Google left are of
computer-generated posts, but these posts started before
22 Feb.
I think they were blocked because they contain SPAM rather
than because they are computer-generated...
Post by Retro Guy
Post by Anton Shepelev
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via
off- line meetings.
That's a great way to meet a lot of FBI agents.
These should be public meetings, even as Usenet is public.
--
() ascii ribbon campaign -- against html e-mail
/\ www.asciiribbon.org -- against proprietary attachments
Retro Guy
2024-04-07 13:11:38 UTC
Post by Anton Shepelev
Post by Retro Guy
Nearly 100% of NoCeM listings since Google left are of
computer-generated posts, but these posts started before
22 Feb.
I think they were blocked because they contain SPAM rather
than because they are computer-generated...
I was referring to articles listed after the Google Groups shutdown. For
a time, until Abaivia seems to have disappeared, there were 1,000 or more
of these posts to de.* groups that were listed in NoCeM. I know; I am one
of the generators of NoCeM notices.

For the time after GG, they were nearly 100% of NoCeM listings.
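For readers who have not met the mechanism: a NoCeM notice is
essentially a signed list of message-IDs that a client or server may
choose to hide, at the reader's discretion. The real notice format and
its PGP verification are omitted here; this is only a sketch of the
client-side filtering step, with hypothetical data shapes.

```python
def filter_articles(articles, nocem_ids):
    """Drop articles whose Message-ID was listed in NoCeM notices
    from issuers the user has chosen to trust.

    articles: list of dicts with a "message_id" key (hypothetical shape).
    nocem_ids: set of message-id strings collected from trusted notices."""
    return [a for a in articles if a["message_id"] not in nocem_ids]

# Two spam articles listed in a notice, one legitimate post kept.
arts = [
    {"message_id": "<spam1@example.invalid>", "subject": "BUY NOW"},
    {"message_id": "<ok@example.invalid>", "subject": "Re: bots"},
    {"message_id": "<spam2@example.invalid>", "subject": "BUY NOW"},
]
listed = {"<spam1@example.invalid>", "<spam2@example.invalid>"}
```

Unlike a hard cancel, NoCeM is advisory: each site decides which
notice issuers to honor.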
Paul
2024-03-27 14:37:32 UTC
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
"The boy was not sure what he was doing in the forest. He had
been hiking for hours and thought he was at the edge of his
endurance. The summer heat and humidity were oppressive and
had left him feeling weak. He was seeking peace and quiet, a
place to meditate and escape the distractions of his busy life.
Maybe he was looking for treasure, but he did not know it.
He was annoyed that his cell phone had no signal, but he was
even more upset that his GPS had malfunctioned, and he had lost his way.

He thought he should be back at his vehicle by now. Unfortunately,
he was not sure where he was, and he was becoming increasingly
frustrated. He started to worry that he was lost. He was not
worried about being eaten by wild animals. There were none in
this part of the forest. He was, however, concerned that the
sun would soon set and that he would become disoriented and lost at night.

He was feeling a bit less confident than he usually did when
he was on a mountain hike. He had felt more at home in the rugged,
beautiful surroundings of the Alps, but he was not sure that
he had the endurance to blast his way out of this particular
situation. He was happy to traverse the rugged trails of the
mountains, but he was not convinced that he could battle his
way out of this. He was grateful that he was a healthy man,
but he was not sure that he had the strength to hike his way out
of the jungle.
"

That's the current state of AI for you.

https://en.wikipedia.org/wiki/I_Can't_Believe_It's_Not_Butter!

Paul
candycanearter07
2024-03-27 14:50:10 UTC
["Followup-To:" header set to alt.free.newsservers.]
Post by Paul
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
"The boy was not sure what he was doing in the forest. He had
been hiking for hours and thought he was at the edge of his
endurance. The summer heat and humidity were oppressive and
had left him feeling weak. He was seeking peace and quiet, a
place to meditate and escape the distractions of his busy life.
Maybe he was looking for treasure, but he did not know it.
He was annoyed that his cell phone had no signal, but he was
even more upset that his GPS had malfunctioned, and he had lost his way.
He thought he should be back at his vehicle by now. Unfortunately,
he was not sure where he was, and he was becoming increasingly
frustrated. He started to worry that he was lost. He was not
worried about being eaten by wild animals. There were none in
this part of the forest. He was, however, concerned that the
sun would soon set and that he would become disoriented and lost at night.
He was feeling a bit less confident than he usually did when
he was on a mountain hike. He had felt more at home in the rugged,
beautiful surroundings of the Alps, but he was not sure that
he had the endurance to blast his way out of this particular
situation. He was happy to traverse the rugged trails of the
mountains, but he was not convinced that he could battle his
way out of this. He was grateful that he was a healthy man,
but he was not sure that he had the strength to hike his way out
of the jungle.
"
That's the current state of AI for you.
https://en.wikipedia.org/wiki/I_Can't_Believe_It's_Not_Butter!
Paul
The "current" state. It definitely could be a worrying prospect.
--
user <candycane> is generated from /dev/urandom
Johanne Fairchild
2024-03-27 20:39:37 UTC
Post by Paul
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
[...]
Post by Paul
That's the current state of AI for you.
There's so much propaganda that people don't understand what it really
is and what it can do and not do.
Anton Shepelev
2024-04-07 00:41:45 UTC
Post by Johanne Fairchild
There's so much propaganda that people don't understand
what it really is and what it can do and not do.
Pray educate us, fair sir, with at least your theses about what AI
is and what it can do.
--
() ascii ribbon campaign -- against html e-mail
/\ www.asciiribbon.org -- against proprietary attachments
David LaRue
2024-03-27 15:50:17 UTC
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
There are likely some already. I've met one that is under several names in
several groups whose motive is an Eliza-like short disagreement answer to
everything.
Paul
2024-03-27 18:35:51 UTC
Post by David LaRue
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGroups
SPAM, than another has appeared on the horizon. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
There are likely some already. I've met one that is under several names in
several groups whose motive is an Eliza-like short disagreement answer to
everything.
There is a known drinker who does that, and is also a nym shifter.
No, he's not a bot.

Paul
Scott Dorsey
2024-03-27 19:30:27 UTC
Permalink
Post by Paul
Post by David LaRue
There are likely some already. I've met one that is under several names in
several groups whose motive is an Eliza-like short disagreement answer to
everything.
There is a known drinker who does that, and is also a nym shifter.
No, he's not a bot.
It is surprising the number of people out there who cannot pass the
Turing test.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
White European
2024-03-27 17:33:21 UTC
Permalink
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus
SPAM, than another has appeared on the horison. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
You need to fight them with your bow and arrows like tribesmen used to
do when fighting white Europeans who went to colonize them :).

AI is here and Usenet/newsgroups are not able to defend themselves.
Sooner or later one has to disappear from the surface of this planet. We
don't have tribesmen still fighting with their rudimentary weapons. Even
Islamists who are still living in caves have bombs and guns to fight
imperialists who try to disrupt their way of living.
grinch
2024-03-30 06:42:22 UTC
Permalink
Post by White European
Post by Anton Shepelev
Hello, all.
No sooner was Usenet purged of the plague of GoogleGropus
SPAM, than another has appeared on the horison. Since AI in
general and LLMs in particular are developing at break-neck
speed, social platforms may soon be infested by intelligent
bots that will be rather hard to distinguish from humans
(e.g. when the LLM is uncensored). Will it be the end of
online group-based communication? Is there any hope of
preventing or at least staving off this new apocalypse?
I for one have only one idea: a heterarchical redundant
mutual cross verification of users by each other via off-
line meetings.
You need to fight them with your bow and arrows like tribesmen used to
do when fighting white Europeans who went to colonize them :).
AI is here and Usenet/newsgroups are not able to defend themselves.
Sooner or later one has to disappear from the surface of this planet. We
don't have tribesmen still fighting with their rudimentary weapons. Even
Islamists who are still living in caves have bombs and guns to fight
imperialists who try to disrupt their way of living.
AI has two primary weaknesses. It can't survive without electricity or
interaction / communications. Cut off one or the other and it's helpless.

If you want to cripple a country these days, just fire some rockets into
data centers. That is the inherent weakness of the "cloud".
Anton Shepelev
2024-04-05 14:13:50 UTC
Permalink
Post by grinch
If you want to cripple a country these days, just fire
some rockets into data centers. That is the inherent
weakness of the "cloud".
Then Russia has failed miserably in crippling Ukraine,
despite its overwhelming advantage in ballistic missiles. Or
did not try to.
--
() ascii ribbon campaign -- against html e-mail
/\ www.asciiribbon.org -- against proprietary attachments
Marcel Zant
2024-04-06 19:52:28 UTC
Permalink
Post by Anton Shepelev
Post by grinch
If you want to cripple a country these days, just fire
some rockets into data centers. That is the inherent
weakness of the "cloud".
Then Russia has failed miserably in crippling Ukraine,
despite its overwhelming advantage in ballistic missiles. Or
did not try to.
Ballistic missiles are an end-all. Russia does not want that, as it ruins
their expansion plans for thousands of years. They are using Ukraine and
Israel to repay the USA for what Reagan did to them and the Biden
administration is too blind to see it.

Russia is also using Ukraine to get rid of unwanted dissidents, members of
their families, unwanted legacy armaments while gathering expended
weaponry for reverse engineering. Everyone is happily cluelessly
acquiescing to the Russian wishes.

Russia has the upper hand in Europe / Asia.

It's very simple. If Ukraine gets supplies and equipment from Europe,
most of the benefactors must still buy oil and gas from Russia to
manufacture and transport it, even operate it. Whatever the USA sends
incurs double the same costs for import and transport. Russia gets paid
or they cut off the gas during winter.

So who is winning?
Phil Hendry's Chop Shop
2024-04-08 14:24:14 UTC
Permalink
On Sat, 6 Apr 2024 19:52:28 -0000 (UTC)
Post by Marcel Zant
Ballistic missiles are an end all. Russia does not want that as it
ruins their expansion plans for thousands of years. They are using
Ukraine and Israel to repay the USA for what Reagan did to them
Why then do we share something as _vital to national security_ as our
SPACE PROGRAM with the Russians?

Why would any nation do something like THAT with their alleged "mortal
frenemies"?

https://www.nasa.gov/history/50-years-ago-the-united-states-and-the-soviet-union-sign-a-space-cooperation-agreement/


"On May 24, 1972, during their summit meeting in Moscow, the leaders of
the United States and the Soviet Union, President Richard M. Nixon and
Premier Aleksei N. Kosygin, signed an agreement on cooperation in
space. One of its articles called for the development of a joint system
to allow their spacecraft to dock with each other in orbit, laying the
groundwork for the Apollo-Soyuz Test Project, the first international
human spaceflight carried out in July 1975."

Wakey, wakey cupcake - not only did Tricky Dick open the door to the
Chicoms he put us in "outer" space with the USSR!
Phil Hendry's Chop Shop
2024-04-08 14:24:40 UTC
Permalink
On Fri, 5 Apr 2024 17:13:50 +0300
Post by Anton Shepelev
Russia has failed miserably in crippling Ukraine,
despite its overwhelming advantage in ballistic missiles. Or
did not try to.
Why do we share something as vital to national security as our SPACE
PROGRAM with the Russians?

Why would any nation do something like THAT with their alleged "mortal
frenemies"?

https://www.nasa.gov/history/50-years-ago-the-united-states-and-the-soviet-union-sign-a-space-cooperation-agreement/


"On May 24, 1972, during their summit meeting in Moscow, the leaders of
the United States and the Soviet Union, President Richard M. Nixon and
Premier Aleksei N. Kosygin, signed an agreement on cooperation in
space. One of its articles called for the development of a joint system
to allow their spacecraft to dock with each other in orbit, laying the
groundwork for the Apollo-Soyuz Test Project, the first international
human spaceflight carried out in July 1975."

Wakey, wakey cupcake - not only did Tricky Dick open the door to the
Chicoms he put us in "outer" space with the USSR!

Paul
2024-04-05 17:57:56 UTC
Permalink
Post by grinch
If you want to cripple a country these days, just fire some rockets into
data centers. That is the inherent weakness of the "cloud".
Not the current strategy. You can tell the strategy
is to hack into infrastructure and cripple it. That's
why people are working at breaking into the control
systems on the drinking water supply. And in the
past, on the electricity supply control systems.

That's how some high speed centrifuges were destroyed
underground in Iran.

Precision bombing today is for sending messages.

[image omitted]

See how personalized the delivery is there? :-/

Paul