ChatGPT, Grok, Gemini (et al): news and discussion about AI


Man raped in jail after AI technology wrongfully identifies him in robbery, suit says

On Jan. 22, 2022, a Sunglass Hut in Houston was robbed by two armed men. The men stole thousands of dollars from the store, according to a lawsuit.

During the robbery, the men ordered two employees into a back room and told them to stay there so they could get away, court records say.

As police were investigating, they got a call from a loss prevention employee with EssilorLuxottica, which is Sunglass Hut’s parent company. The employee told police they “could stop their investigation because he found their guy,” the lawsuit said.

The employee said he worked with Macy’s loss prevention using artificial intelligence and facial recognition software to identify the suspect as the 61-year-old man, the lawsuit said.

https://www.yahoo.com/news/man-raped-jail-ai-technology-210846029.html
 

It's true, LLMs are better than people – at creating convincing misinformation

Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans.

Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set out to examine whether LLM-generated misinformation can cause more harm than the human-generated variety of infospam.

In a paper titled "Can LLM-Generated Misinformation Be Detected?", they focus on the challenge of computationally detecting misinformation – content with deliberate or unintentional factual errors. The paper has been accepted to the International Conference on Learning Representations later this year.

More:

 
I suppose there is a good chance that the advent of LLMs/AI is going to lead to a rebirth of the value in critical thinking and due diligence.
 
Battle of the bots

Condensed version of an article taken from here https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-023-00440-3

Armies of bots battled on Twitter over Chinese spy balloon incident

Tens of thousands of bots tussled on Twitter to try to shape the debate as a Chinese spy balloon flew over the US and Canada last year, according to an analysis of social media posts.

Kathleen Carley and Lynnette Hui Xian Ng at Carnegie Mellon University in Pennsylvania tracked nearly 1.2 million tweets posted by more than 120,000 users on Twitter – which has since been renamed X – between 31 January and 22 February 2023. All tweets contained the hashtags #chineseballoon and #weatherballoon, discussing the controversial airborne object that the US claimed China had used for spying.


The tweets were then geolocated using Twitter’s location feature, and checked with an algorithm called BotHunter, which looks for signs that an account isn’t controlled by a human.

“There are lots of different things [identifying a bot] is based off, but examples are whether your messages are being sent out so fast that a human literally can’t type that fast, or if you’re geotagged in London one minute, then in New Zealand when it’s physically impossible for a person to do so,” says Carley.
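The two signals Carley describes can be sketched in code. To be clear, the actual BotHunter is a trained machine-learning classifier; the snippet below is only an illustration of those two example heuristics (inhumanly fast posting and physically impossible location jumps), with made-up thresholds that are my assumptions, not values from the study.

```python
import math
from dataclasses import dataclass

# Assumed thresholds for illustration only (not from the paper)
MAX_HUMAN_CHARS_PER_SEC = 10      # sustained typing faster than any human
MAX_TRAVEL_SPEED_KMH = 1000       # roughly faster than a commercial airliner

@dataclass
class Tweet:
    timestamp: float   # seconds since epoch
    text: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_like_bot(tweets):
    """Flag an account whose consecutive tweets imply inhuman typing or travel."""
    for prev, cur in zip(tweets, tweets[1:]):
        dt = cur.timestamp - prev.timestamp
        if dt <= 0:
            return True  # simultaneous or out-of-order posts
        if len(cur.text) / dt > MAX_HUMAN_CHARS_PER_SEC:
            return True  # "typed" faster than humanly possible
        speed_kmh = haversine_km(prev.lat, prev.lon, cur.lat, cur.lon) / (dt / 3600)
        if speed_kmh > MAX_TRAVEL_SPEED_KMH:
            return True  # geotag jump a person couldn't make (London -> NZ in a minute)
    return False
```

Two tweets a minute apart geotagged in London and then Auckland would trip the travel check, exactly the example Carley gives; a real classifier combines many such features with account metadata rather than hard cutoffs.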

More:

 

Your AI Girlfriend Is a Data-Harvesting Horror Show

Lonely on Valentine’s Day? AI can help. At least, that’s what a number of companies hawking “romantic” chatbots will tell you. But as your robot love story unfolds, there’s a tradeoff you may not realize you’re making. According to a new study from Mozilla’s *Privacy Not Included project, AI girlfriends and boyfriends harvest shockingly personal information, and almost all of them sell or share the data they collect.

“To be perfectly blunt, AI girlfriends and boyfriends are not your friends,” said Misha Rykov, a Mozilla Researcher, in a press statement. “Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Mozilla dug into 11 different AI romance chatbots, including popular apps such as Replika, Chai, Romantic AI, EVA AI Chat Bot & Soulmate, and CrushOn.AI. Every single one earned the Privacy Not Included label, putting these chatbots among the worst categories of products Mozilla has ever reviewed. The apps mentioned in this story didn’t immediately respond to requests for comment.

You’ve heard stories about data problems before, but according to Mozilla, AI girlfriends violate your privacy in “disturbing new ways.” For example, CrushOn.AI collects details including information about sexual health, use of medication, and gender-affirming care. 90% of the apps may sell or share user data for targeted ads and other purposes, and more than half won’t let you delete the data they collect. Security was also a problem. Only one app, Genesia AI Friend & Partner, met Mozilla’s minimum security standards.

More:

 
^^^^^^

Don’t date robots — their privacy policies are terrible

Talkie Soulful Character AI, Chai, iGirl: AI Girlfriend, Romantic AI, Genesia - AI Friend & Partner, Anima: My Virtual AI Boyfriend, Replika, Anima: AI Friend, Mimico - Your AI Friends, EVA AI Chat Bot & Soulmate, and CrushOn.AI are not just the names of 11 chatbots ready to play fantasy girlfriend — they’re also potential privacy and security risks.

A report from Mozilla looked at those AI companion apps, finding many are intentionally vague about the AI training behind the bot, where their data comes from, how they protect information, and their responsibilities in case of a data breach. Only one (Genesia) met its minimum standards for privacy.

Wired says the AI companion apps reviewed by Mozilla “have been downloaded more than 100 million times on Android devices.”

More:

 
Google Gemini is pro-pedophile and even refers to them as “minor-attracted persons” (MAPS).

AI does not have individual thought. It makes decisions based on an accumulation of data presented to it.

Meaning Gemini was taught this. This is the world the left wants.


 

Walmart CEO confirms new AI technology that will change checkouts forever - but will you want cameras prying into your cart?

  • The tech is being rolled out to 600 Sam's Club stores this year
  • Sam's Club is owned by Walmart - which may use the tech in Walmart too
  • AI means employees will not have to check customer receipts at store exits

A checkout change is on the way at Sam's Club stores aimed at speeding up wait times for shoppers, the boss of its parent company Walmart has confirmed.

The membership-based warehouse store - Walmart's version of Costco - will allow customers to scan and pay for groceries with an app on their phone and just walk straight out.

'At Sam's Club US, we're rolling out new exit technology that enables our members to use scan and go to just walk out after completing their transaction on their phone,' Walmart CEO Doug McMillon said in an earnings call last week.

More:

 

AI Helps Litigation Funders Mine Court Dockets for Legal Gold

  • Legalist Inc. uses the ‘truffle sniffer’ to find cases
  • Case Miner is the AI search tool for firm Qanlex
Companies that seek profits by funding lawsuits are using AI to help find cases to invest in, even as skepticism lingers about the tool’s usefulness.

Legalist Inc. built an algorithm called “the truffle sniffer” to search for lawsuits by focusing on variables such as the court, judge and case type. The tool was essential for the alternative asset manager’s growth, with $901 million now under management after its 2016 founding.

“The AI is a really crucial part of sourcing,” said Eva Shang, Legalist’s chief executive officer. She co-founded the company with the help of a $100,000 grant from billionaire Peter Thiel’s foundation after dropping out of Harvard.

Whether companies use AI to decide which cases to fund, or simply for research, the tool is becoming increasingly common in the $13.5 billion litigation finance industry, in which investors fund lawsuits and take a portion of any successful awards.

More:

 
I suppose there is a good chance that the advent of LLMs/AI is going to lead to a rebirth of the value in critical thinking and due diligence.
Hopefully, but I'm thinkin' it'll end up going in the exact opposite direction.

Right now AI is still being dialed in, so to speak. Once they get it to where most people see it as relatively accurate, it will become readily adopted.

What I see is a World where no one needs critical thinking skills, as whenever a problem comes up, just ask your AI what to do. If the average person learns that they can get better answers than they could have come up with themselves, and it happens over and over, why wouldn't people end up just letting it figure everything out for them? People always end up taking what looks like the easy way, and thinking can be hard. Asking AI and getting an instant answer is easy peasy.
.....and I'm talkin' down the road a bit, not right now. Ie: where all this AI stuff will eventually lead society over the next generation if it keeps going at the rate it is, or even accelerates. As is likely.

Just like how today, thanks to cell phones, hardly anyone can remember a phone number. Even their own a lotta times.
Because when it's all right there in an electronic list, most see no need to commit those #'s to memory anymore. The average can now put those brain cells to work remembering some bs about pop culture instead.

Or how GPS makes it where most people no longer need to create a mental map in their head in order to know how to get around. Just let the GPS do that mental work for us.
Sure, occasionally it'll direct someone to drive into a lake, but the vast majority of time it does just fine for the average.

If AI gets to the point that most see its answers and decisions as being better than their own, that logic and reasoning part of most people's brains might shut off for everything greater than the most trivial of tasks.
Think Idiocracy, but with smart computers directing everything anyone does.

I've read often where people ponder what kind of World we're headed for. Usually along the lines of 1984, or perhaps some version of a Brave New World.

However, after some thinking about where this stuff might be headed, I think it'll be more like 2112, with the Priests being the gov and tech companies. Ie: fascism.

We’ve taken care of everything
The words you read
The songs you sing
The pictures that give pleasure
To your eye
One for all and all for one
Work together
Common sons
Never need to wonder
How or why


Except there is no Solar Federation coming to save us.
 
My father has joked for many decades that he wanted to create an online church where people could visit to get some randomly generated spiritual guidance and then pay a tithe. A tax free money suck that doesn't require any human effort (beyond the initial set up). I knew it wouldn't be long before someone used AI to do this.
 
 

MyShell, Blockchain Platform For Building 'AI Girlfriends' and Productivity Apps, Raises $11M

MyShell, a Tokyo-based decentralized AI platform used to create "AI girlfriends," has raised $11 million in pre-series A funding for its AI app-building ecosystem.

The investment, which brings MyShell's total funding to $16.6 million, was led by Dragonfly, with additional participation from Delphi Ventures, Bankless Ventures, Maven11 Capital, Nascent, Nomad Capital and OKX Ventures. The round also attracted support from individual investors such as crypto investor and thought leader Balaji Srinivasan, NEAR's Illia Polosukhin and Paradigm's Casey K. Caruso.

MyShell pitches itself as an ecosystem for AI creators to develop and deploy applications. The new funds will be used to further the development of its open-source foundational model and to support its community "of over 1 million registered users and 50,000 creators," according to a statement.

MyShell aims to distinguish itself from other AI platforms with its focus on decentralization: The company has open-sourced key parts of its codebase and leans on blockchain tools to assist with model training and creator royalties.

Crypto will "uniquely enable" MyShell to "help creators and AI model researchers to monetize their work," and blockchains will serve as the foundation for "a dedicated platform to trade, and own AI assets," MyShell CEO Ethan Sun told CoinDesk in an email.

More:

 