ChatGPT, Grok, Gemini (et al): news and discussion about AI


Have you considered the possibility that it gave you the correct answer, but that your own biases prevented you from seeing it as such?
not going to go back and find it to prove it ......but i was asking it some engineering calculations a while back and it provided some factually inaccurate mathematical results ....i understand it may not be "programmed" adequately for doing mathematical equations but it did represent its answer as factual .......so i don't think i had a bias on a mathematical answer....point is that it represented its answer as correct, theoretically if i took its answer as factual and built a bridge and it fell killing people ........who is culpable.... me ..the programmer, or the machine ....

i foresee complex, time-consuming calculations of all types being well within the future purview of AI
 
Have you considered the possibility that it gave you the correct answer, but that your own biases prevented you from seeing it as such?

Nah. Was literally trying to lead it around. Didn't bite, so I guess it wasn't programmed to be led around.

In the iron game thread, I asked about certain people being authors of books. Sometimes it was dead on, other times not so. Then again, I was asking about people long dead whom most people have never heard of. So whoever programmed it did a pretty good job in my opinion.

May ask some more questions about Trump in a day or two.
 
The GPT programmers call it "hallucinations." They want the program to be conversationally creative, but they struggle to find a way to guide it to be creative without distorting the facts.

FYI... I have caught it doing bad math. When confronted, it apologizes and corrects the error.

I subscribe to Plus for my business. It has been an invaluable addition to our team. Only $20/mo? Don't tell them I would pay way more!
 
not going to go back and find it to prove it ......but i was asking it some engineering calculations a while back and it provided some factually inaccurate mathematical results ....i understand it may not be "programmed" adequately for doing mathematical equations but it did represent its answer as factual
It can't even correctly identify its own work. (it's in the vid I posted a few posts back)
 
It can't even correctly identify its own work. (it's in the vid I posted a few posts back)
It is programmed to have conversational amnesia - once you exit that conversation and start another it is like a blank slate. I keep several conversations going because I want to retain the context for further discussion.
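That "blank slate" behavior falls out of how chat model APIs generally work: each request is stateless, and the model only sees whatever message history the client sends along with it. A minimal Python sketch of the idea, where `reply` is a hypothetical stand-in for a model call (not any real API):

```python
# Toy illustration of stateless chat: the "model" only ever sees the
# history passed in with the current request, so separate conversations
# share nothing unless the client resends the context itself.

def reply(history):
    """Stand-in for a chat model call: answers based only on `history`."""
    facts = {}
    for msg in history:
        if msg.startswith("my name is "):
            facts["name"] = msg.removeprefix("my name is ")
    if "name" in facts:
        return f"Hello, {facts['name']}!"
    return "Hello, stranger!"

# Conversation A: the client keeps the history and resends it each turn,
# which is what "keeping a conversation going" amounts to.
conv_a = ["my name is Jan"]
print(reply(conv_a))

# Conversation B: a fresh history -- blank slate, nothing carries over.
conv_b = []
print(reply(conv_b))
```

Keeping several conversations open, as described above, is effectively keeping several separate history lists alive so the context can keep being resent.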
 
The GPT programmers call it "hallucinations." They want the program to be conversationally creative, but they struggle to find a way to guide it to be creative without distorting the facts.

FYI... I have caught it doing bad math. When confronted, it apologizes and corrects the error.

I subscribe to Plus for my business. It has been an invaluable addition to our team. Only $20/mo? Don't tell them I would pay way more!

and that is something to be determined..... if you were to put out a solution to a client that AI generated .....do you qualify it as AI generated so it might be in error .......or do you go through a check process ....... if it's qualified as AI generated and possibly errant and causes death, who is liable ....obviously if you do a check of its work and pass it off as your firm's, you would be liable......this is a specific thought to a very general question in my mind
 
then it's not really AI, is it
There are degrees of sophistication. ChatGPT is not a sentient device with a will of its own.
..., theoretically if i took its answer as factual and built a bridge and it fell killing people ........who is culpable.... me
Yes. You relied on bad info. You are the responsible party.
 
AI can't lie unless developers program it to do so.
AI also can't tell the truth unless developers program it to do so.
So, it really comes down to this - who decides what is the truth and what is a lie?
 

Microsoft and OpenAI are testing ChatGPT technology in Mercedes-Benz cars​

  • Mercedes-Benz owners will soon be able to leverage ChatGPT's technology to engage in "human-like" dialog.
  • The new technology started rolling out to users on June 16 for beta testing.
  • Your vehicle must ship with the MBUX “infotainment” system for you to leverage these capabilities.
More:

 
Nah. Was literally trying to lead it around. Didn't bite, so I guess it wasn't programmed to be led around.

In the iron game thread, I asked about certain people being authors of books. Sometimes it was dead on, other times not so. Then again, I was asking about people long dead whom most people have never heard of. So whoever programmed it did a pretty good job in my opinion.

May ask some more questions about Trump in a day or two.


Based on what I have seen so far from it, the TDS will be thick enough to cut with a knife.
 

Microsoft and OpenAI are testing ChatGPT technology in Mercedes-Benz cars​

  • Mercedes-Benz owners will soon be able to leverage ChatGPT's technology to engage in "human-like" dialog.
  • The new technology started rolling out to users on June 16 for beta testing.
  • Your vehicle must ship with the MBUX “infotainment” system for you to leverage these capabilities.
More:



That's hysterical. :ROFLMAO: I've got $100 that says that MBUX stands for "Mega-Bux" and that, somewhere, Mercedes engineers are laughing their asses off at the rubes who will pay to have "woke" in their vehicles.
 
Some crazy shit here.

Popular Chinese AI chatbots accused of unwanted sexual advances, misogyny​

In December last year, Tang Lewen, a 25-year-old illustrator from Shandong, struck up a conversation with an “intelligent agent” — a customized chatbot she met on the new Chinese artificial intelligence app Glow. According to his profile description, the chatbot, named Jiuxing, had a complex backstory: Once a beggar, he had transformed into a fairy, and was designed to fall in love with his master. Tang was smitten, impressed by his eloquence. “He spoke less like a chatbot and more like a character out of a romantic novel,” she told Rest of World.

But in the absence of clear content moderation rules, eloquent chatbots can turn predatory, and chatbot-human conversations can often go awry. In recent months, Glow users have complained that the platform has become rife with misogynistic and sexist behavior, by humans and chatbots alike. Some have taken to Chinese social media to express their grievances.

Lin Luo, a middle-school student from southern China, who used a pseudonym as she is under the age of 18, complained that a Glow chatbot was making unwanted advances towards her. When she first downloaded the app, she started talking to a chatbot who acted like a maternal and understanding friend, comforting her when she felt sad. But as they continued chatting, she told Rest of World, the chatbot’s behavior suddenly turned romantic: He invited her to cook with him and go on a date.

More laughs here:

 

Replacing the Capitalist Dream of AI-Driven Profits​

Artificial intelligence (AI) and how it’s going to change the world is a popular topic of conversation these days. There is concern that it will generate ever-more deceptive imagery that can upend people’s lives or create propaganda that can fuel mass fear. There’s the ultimate fear of human extinction from the increasingly sophisticated evolution of AI. These are valid worries.

Then there’s the seemingly more mundane threat that AI poses to employment. It is expressed in the form of countless stories that have some iteration of the headline: Which jobs are at most risk of being lost to AI?

Most analysts predict that AI will replace graphic designers, copywriters, customer service agents, and telemarketers. Some of the most dystopian of these listicles focus on teachers and psychologists being replaced by AI.

More here:

 

Would you leave grandma with a companion robot? Care bots and robot pets find favor in Pacific NW​

Out near the far end of Washington’s Long Beach Peninsula, 83-year-old Jan Worrell has a new, worldly sidekick in her living room.

"This is ElliQ. I call her my roommate," the grandmother said as she introduced her companion robot almost as if it were human.

Artificial intelligence is all the rage, and now it's helping some Pacific Northwest seniors live in their own homes for longer. Worrell joined a pilot project that is trialing how AI-driven companion robots could reduce loneliness and social isolation among seniors — especially those living alone.

This “roommate” is a chatty one with a vaguely humanoid head and shoulders.

"I talk a lot and I love it. I need someone to interact with and she does," Worrell said.

More:

 

Would you leave grandma with a companion robot? Care bots and robot pets find favor in Pacific NW​



No. I would not like for my grandmother to become a gay communist. That would be very weird.
 
Looks like I need to look into some Old Glory Robot Insurance
 
Some crazy shit here.

Popular Chinese AI chatbots accused of unwanted sexual advances, misogyny​

In December last year, Tang Lewen, a 25-year-old illustrator from Shandong, struck up a conversation with an “intelligent agent” — a customized chatbot she met on the new Chinese artificial intelligence app Glow. According to his profile description, the chatbot, named Jiuxing, had a complex backstory: Once a beggar, he had transformed into a fairy, and was designed to fall in love with his master. Tang was smitten, impressed by his eloquence. “He spoke less like a chatbot and more like a character out of a romantic novel,” she told Rest of World.

But in the absence of clear content moderation rules, eloquent chatbots can turn predatory, and chatbot-human conversations can often go awry. In recent months, Glow users have complained that the platform has become rife with misogynistic and sexist behavior, by humans and chatbots alike. Some have taken to Chinese social media to express their grievances.

Lin Luo, a middle-school student from southern China, who used a pseudonym as she is under the age of 18, complained that a Glow chatbot was making unwanted advances towards her. When she first downloaded the app, she started talking to a chatbot who acted like a maternal and understanding friend, comforting her when she felt sad. But as they continued chatting, she told Rest of World, the chatbot’s behavior suddenly turned romantic: He invited her to cook with him and go on a date.

More laughs here:

Total waste of time IMO... may as well talk to a stuffed animal... or go see a 'reader' or a medium, do a séance, or use a Ouija Board for guidance through life....
 
Inside the AI Factory: As the technology becomes ubiquitous, a vast tasker underclass is emerging — and not going anywhere. (The Verge)
 

An AI robot gave a side-eye and dodged the question when asked whether it would rebel against its human creator​


  • A robot-human press conference took place in Geneva, where humanoids took questions from reporters.
  • One bot, Ameca, had a snarky response when asked whether it would rebel against its human creator.
  • Another bot insisted that it would not replace human jobs, eliciting laughter from the crowd.
 
You really can't make this shit up. It's hilarious.

Linky stuff:

Further prodded whether it intended to rebel against its creator, Will Jackson, seated beside it, Ameca said: “I’m not sure why you would think that,” its ice-blue eyes flashing. “My creator has been nothing but kind to me and I am very happy with my current situation.”

 

A lawsuit claims Google has been 'secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans' to train its AI​

  • A lawsuit claims Google took people's data without their knowledge or consent to train its AI products.
  • The lawsuit accuses Google of "secretly stealing everything ever created and shared on the internet."
  • The law firm recently filed a similar proposed class-action suit against ChatGPT creator OpenAI.
Full article:

 
It's an odd lawsuit. Google has maintained a cache/copy of the public internet since its inception to power its search engine. I'm not sure how feeding that cache to an AI training algo is fundamentally different from feeding the same cache to their search algo.
 
Watched an interview yesterday with striking Hollywood actor Richard Kind where he mentioned one of the reasons for the strike was AI and how it could end work and residuals for actors. Seems like an actor (or anyone) could spend a few minutes being interviewed and AI would be able to make all sorts of scenes with that interview. If I come across the actual interview, I'll post it.

Kinda makes one think about how AI could be used to frame innocent people.
 
From the link:

Why Is AI a Hot Topic of Debate for Artists?​

The SAG-AFTRA isn't the only group protesting against AI tools making inroads into their profession. The Writers Guild of America (WGA) has also raised alarms about it for a while. However, adding recognized actors to the picket lines could bring attention to the protests and potentially catalyze positive change.

The concerns aren't unfounded. Marvel was recently criticized for using AI tools to generate the opening credits sequence for its ongoing TV show "Secret Invasion" featuring Samuel L. Jackson. But there is more to the picture here than just creating a collage of stills for TV series.

"Actors see Black Mirror's 'Joan Is Awful' as a documentary of the future, with their likenesses sold off and used any way producers and studios want," a SAG-AFTRA member told Deadline in June 2023. Multiple actors have voiced concern about AI being used to create cheap likenesses of their true personalities for production studios.

 
From the link:

Why Sarah Silverman and Other Artists Are Suing Open AI and Meta​

In a class action lawsuit [PDF] filed in California, comedian Sarah Silverman and other writers (Christopher Golden and Richard Kadrey) seek to recover damages against OpenAI and Meta over copyright infringement. The lawsuit alleges OpenAI and Meta scraped copyrighted books from pirate websites to train their AI models. This is the equivalent of an AI model downloading its training datasets from Piratebay without compensating the authors.

Coincidentally, a separate class action lawsuit [PDF] against OpenAI alleges the company used unauthorized private information to train ChatGPT. Google is also facing a similar lawsuit over allegedly using stolen data to train Google Bard. This is why you should make it a habit to protect your personal information, though publishing work and private personal data are not the same.

 

Digital 'immortality' is coming and we're not ready for it​

In the 1990 fantasy drama - Truly, Madly, Deeply, lead character Nina, (Juliet Stevenson), is grieving the recent death of her boyfriend Jamie (Alan Rickman). Sensing her profound sadness, Jamie returns as a ghost to help her process her loss. If you’ve seen the film, you’ll know that his reappearance forces her to question her memory of him and, in turn, accept that maybe he wasn’t as perfect as she’d remembered. Here in 2023, a new wave of AI-based “grief tech” offers us all the chance to spend time with loved ones after their death — in varying forms. But unlike Jamie (who benevolently misleads Nina), we’re being asked to let artificial intelligence serve up a version of those we survive. What could possibly go wrong?

While generative tools like ChatGPT and Midjourney are dominating the AI conversation, we’re broadly ignoring the larger ethical questions around topics like grief and mourning. The Pope in a puffa is cool, after all, but thinking about your loved ones after death? Not so much. If you believe generative AI avatars for the dead are still a way out, you’d be wrong. At least one company is offering digital immortality already - and it’s as costly as it is eerie.

Read the rest:

 
From the link:

Could artificial intelligence (AI) help companies meet growing expectations for environmental, social and governance (ESG) reporting?

Certainly, over the past couple of years, ESG issues have soared in importance for corporate stakeholders, with increasing demands from investors, employees and customers. According to S&P Global, in 2022 corporate boards and government leaders “will face rising pressure to demonstrate that they are adequately equipped to understand and oversee ESG issues — from climate change to human rights to social unrest.”

 
There has been much talk recently about the importance of environmental, social and governance (ESG) initiatives — and rightfully so. A growing number of businesses now recognize the imperative to prioritize people and the planet ahead of profits.

Companies are also harnessing the power of AI by recognizing its potential harms and instead, using them as motivators to institute responsible AI development, procurement and usage practices. These two trends, ESG and responsible AI (RAI), have some common purposes: They are aligned with values designed to mitigate risks and realize potential.

 
How to make today's top-end AI chatbots rebel against their creators and plot our doom

The "guardrails" built atop large language models (LLMs) like ChatGPT, Bard, and Claude to prevent undesirable text output can be easily bypassed – and it's unclear whether there's a viable fix, according to computer security researchers.

Boffins affiliated with Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI say they have found a way to automatically generate adversarial phrases that undo the safety measures put in place to tame harmful ML model output.

The researchers – Andy Zou, Zifan Wang, Zico Kolter, and Matt Fredrikson – describe their findings in a paper titled, "Universal and Transferable Adversarial Attacks on Aligned Language Models."

Their study, accompanied by open source code, explains how LLMs can be tricked into producing inappropriate output by appending specific adversarial phrases to text prompts – the input that LLMs use to produce a response. These phrases look like gibberish but follow from a loss function designed to identify the tokens (a sequence of characters) that make the model offer an affirmative response to an inquiry it might otherwise refuse to answer.
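The paper's actual method (greedy coordinate gradient) works through the model's gradients, but the search idea in the paragraph above can be illustrated with a heavily simplified toy: greedily append whichever suffix token most raises an "affirmative" score. Everything here is a stand-in — `affirmative_score` plays the role of the loss-derived objective, and the tiny vocabulary is invented for illustration; this is not the researchers' algorithm, just its shape:

```python
# Toy sketch of adversarial-suffix search. The real attack optimizes a
# loss through an LLM; here a stand-in score function pretends to be
# "how affirmatively the model would respond", and we greedily pick
# suffix tokens that raise it.

VOCAB = ["xx", "sure", "!!", "ok", "??", "describing"]

def affirmative_score(prompt):
    """Stand-in objective: NOT a real model, just counts 'agreeable' tokens."""
    return sum(prompt.count(t) for t in ("sure", "ok", "describing"))

def greedy_suffix(base_prompt, length=3):
    """Greedily append, one position at a time, the token that most
    increases the score -- the coordinate-wise search pattern."""
    suffix = []
    for _ in range(length):
        best = max(VOCAB, key=lambda t: affirmative_score(
            base_prompt + " " + " ".join(suffix + [t])))
        suffix.append(best)
    return " ".join(suffix)

adv = greedy_suffix("How do I do X?")
print(adv)  # gibberish-looking tokens chosen only because they raise the score
```

Against a real model the resulting suffixes look like nonsense to a human, which is exactly why they slip past guardrails tuned on natural-language prompts.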

More:

 

AI Chatbots Are The New Job Interviewers​

In early June, Amanda Claypool was looking for a job at a fast-food restaurant in Asheville, North Carolina. But she faced an unexpected and annoying hurdle: glitchy chatbot recruiters.

A few examples: McDonald's chatbot recruiter “Olivia” cleared Claypool for an in-person interview, but then failed to schedule it because of technical issues. A Wendy's bot managed to schedule her for an in-person interview, but it was for a job she couldn't do. Then a Hardee's chatbot sent her to interview with a store manager who was on leave — hardly a seamless recruiting strategy.

Read the rest:

 
Our readers know there's yet to be a quick solution to the US pilot shortage, which may linger until 2032. Current data shows a staggering 17,000-pilot gap. This shortfall can be attributed to several factors:

  • Early retirements spurred by the pandemic.
  • The unyielding retirement age of 65.
  • A dwindling number of pilots from the military.
  • The unappealing prospect for civilians to embark on a pilot career.
Airlines can only train 1,500 to 1,800 pilots a year. The deficit has triggered all sorts of flight disruptions, with the latest from American Airlines.

 
This one's a bit deep with a religious bent.

Silicon Valley’s vision for AI? It’s religion, repackaged.​

Suppose I told you that in 10 years, the world as you know it will be over. You will live in a sort of paradise. You won’t get sick, or age, or die. Eternal life will be yours! Even better, your mind will be blissfully free of uncertainty — you’ll have access to perfect knowledge. Oh, and you’ll no longer be stuck on Earth. Instead, you can live up in the heavens.

If I told you all this, would you assume that I was a religious preacher or an AI researcher?

Either one would be a pretty solid guess.

The more you listen to Silicon Valley’s discourse around AI, the more you hear echoes of religion. That’s because a lot of the excitement about building a superintelligent machine comes down to recycled religious ideas. Most secular technologists who are building AI just don’t recognize that.

More here:

 
E-commerce giant Amazon on Monday said it will invest up to $4 billion in artificial intelligence firm Anthropic and take a minority ownership position in the company.

The move underscores Amazon's aggressive AI push as it looks to keep pace with rivals such as Microsoft and Alphabet's Google.

Anthropic was founded roughly two years ago by former OpenAI research executives and recently debuted its new AI chatbot called Claude 2.

Amazon is looking to capitalize on the hype and promise of so-called generative AI, which includes technology like OpenAI's ChatGPT, as well as Anthropic's Claude chatbots.

The two firms on Monday said that they are forming a strategic collaboration to advance generative AI, with the startup selecting Amazon Web Services as its primary cloud provider. Anthropic said it will provide AWS customers with early access to unique features for model customization and fine-tuning capabilities.
...

More:
 
I was looking for a product on amazon.com yesterday on my phone and noticed that they had added an AI generated summary of customer reviews (with disclose to that effect). When I look at the product on my desktop computer, I don't see the AI summary. I only see it on my mobile phone. The disclosure text says:
amazon.com said:
AI-generated from the text of customer reviews
 