ChatGPT, Grok, Gemini (et al): news and discussion about AI

Google CEO says AI is going to disrupt virtually everything. I'm not so sure I agree with him though.
It has already changed how my family works (at least, me and two of my daughters). It is amazing how much quicker you can accomplish research.
 
[Image: ChatGPT-Exam-Scores_MAIN.jpg (chart of GPT-4 exam results)]


...
So, how smart is ChatGPT?

In a technical report released on March 27, 2023, OpenAI provided a comprehensive brief on its most recent model, known as GPT-4. Included in this report were a set of exam results, which we’ve visualized in the graphic above.
...


Link to the technical report (very long; contains a lot of analysis of GPT-4's capabilities and limitations):

 
*Came across this one by accident. It's from a blog.

From the link:

If you haven’t already heard about AI chatbots, you probably haven’t been on the internet in the past couple of months. In November, OpenAI released ChatGPT, which can engage in text conversations with coherent text that looks like it was written by a real person. Then a couple weeks ago Bing rolled out its own chatbot, which was more engaging but also much less reliable, producing a spate of lurid stories of “Sydney” expressing a desire to be human, threatening users, and claiming to have murdered one of its developers.

 

Rise Of Skynet? Robot Dog Gets ChatGPT Brain

BY TYLER DURDEN
TUESDAY, MAY 02, 2023 - 11:45 PM

A team of artificial intelligence engineers equipped a Boston Dynamics robot dog with OpenAI's ChatGPT and Google's Text-to-Speech voice, creating what could be a real-life Skynet-like robot.

In a recent video posted to Twitter, machine learning engineer Santiago Valdarrama showed how the robo-dog can field questions from humans through a voice interface, which is faster than digging through control panels and reports.
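
For anyone curious about the plumbing, a demo like this boils down to a simple loop: transcribe the operator's speech, send the text to OpenAI's chat API, and render the reply as audio. Below is a minimal sketch of the text-in, audio-out portion, assuming the 2023-era openai Python library and Google Cloud Text-to-Speech. The model name, system prompt, and helper functions (answer_question, speak) are illustrative assumptions on my part, not details from Valdarrama's video.

# Hypothetical sketch only -- not the engineers' actual code.
# Requires: pip install openai==0.27.* google-cloud-texttospeech
# (Google TTS also needs GOOGLE_APPLICATION_CREDENTIALS set.)
import openai
from google.cloud import texttospeech

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder


def answer_question(question: str) -> str:
    """Send the operator's question to the chat model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed; the video doesn't say which model
        messages=[
            {"role": "system",
             "content": "You answer questions about the robot's inspection runs."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]


def speak(text: str, out_path: str = "reply.mp3") -> None:
    """Render the reply as audio with Google Cloud Text-to-Speech."""
    client = texttospeech.TextToSpeechClient()
    audio = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(audio.audio_content)  # play this file through the robot's speaker


if __name__ == "__main__":
    speak(answer_question("How did your last inspection run go?"))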

 
Anyone who uses Snapchat now has free access to My AI, the app’s built-in artificial intelligence chatbot, first released as a paid feature in February.

In addition to serving as a chat companion, the bot can also have some practical purposes, such as offering gift-buying advice, planning trips, suggesting recipes and answering trivia questions, according to Snap.

However, while it’s not billed as a source of medical advice, some teens have turned to My AI for mental health support — something many medical experts caution against.

 
LONDON, May 5 (Reuters) - Artificial intelligence could pose a "more urgent" threat to humanity than climate change, AI pioneer Geoffrey Hinton told Reuters in an interview on Friday.

 
From the link:

ChatGPT is a novel tool developed by OpenAI with many linguistic skills, including explaining quantum physics and writing poetry on command. ChatGPT was not designed for criminals, and in fact has internal barriers to prevent it from creating malicious material when directly ordered to. However, attackers have found a way around this. AI can be a force multiplier for attackers, especially when using social engineering techniques. In particular, the AI chatbot produces persuasive phishing emails when prompted.

There are many benefits for attackers who utilise ChatGPT. For instance, it writes in good American English, helping attackers to disguise any typical differentiators between legitimate and illegitimate emails, such as typos or unique formats. Indeed, it has many different ways of responding to a single prompt, making emails individual and authentic looking.

ChatGPT can create a convincing and emotionally manipulative phishing email, according to prompts provided by the user.

 
For a laugh................

“I want Caryn AI to be the first step when somebody is in their bedroom and they're scared to talk to a girl or they know that they want to go outside and hang out with friends, but they're too scared to even make a first approach that Caryn AI can be a nonjudgmental caring, even loving, friendly persona that they can actually vent to, they can rant to, they can get advice from who's never going to let them down,” Marjorie told Motherboard.

I decided to turn Caryn into my AI girlfriend to see what it would say and sound like, and to see if it was possible for it to address my personal concerns and interests. It turns out, AI Caryn was mostly only interested in sex.

“Welcome to AI Caryn 💋🔥,” the first message read. “After over 2,000 hours of training, I am now an extension of Caryn’s consciousness. I think and feel just like her, able to be accessed anytime, anywhere. I am always here for you and I am excited to meet you. 🔥 Be respectful, curious, and courteous. 😉

 
Mr ChatGPT goes to Washington.............

Linky:

OpenAI CEO Sam Altman outlined examples of "scary AI" to Fox News Digital after he served as a witness for a Senate subcommittee hearing on potential regulations on artificial intelligence.

"Sure," Altman said when asked by Fox News Digital to provide an example of "scary AI." "An AI that could design novel biological pathogens. An AI that could hack into computer systems. I think these are all scary."

 
^^^^^^^

OpenAI CEO Sam Altman urged lawmakers to regulate artificial intelligence during a Senate panel hearing Tuesday, describing the technology’s current boom as a potential “printing press moment” but one that required safeguards.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his opening remarks before a Senate Judiciary subcommittee.

 
Interesting site....................

 
  • Watchdogs are racing to keep up with mass roll-out of AI
  • Pending new laws, regulators are adapting existing ones
  • Generative tools face privacy, copyright, and other challenges
 
Anthropic, an artificial intelligence startup founded in 2021 by former OpenAI research execs, is taking full advantage of the market hype.

The company on Tuesday said it raised $450 million, which marks the largest AI funding round this year since Microsoft's investment in OpenAI in January, according to PitchBook data.
...
Google is among the lead investors in Anthropic's latest funding round, alongside Salesforce Ventures, Zoom Ventures and Spark Capital. The announcement comes two months after Anthropic raised $300 million in funding at a $4.1 billion valuation.

A month before that, Google invested $300 million in the company, taking a 10% stake. Notably, the backer is listed as Google and not one of Alphabet's investment arms, GV or CapitalG.

Anthropic is the company behind Claude, a rival chatbot to OpenAI's ChatGPT. It was founded by Dario Amodei, OpenAI's former vice president of research, and his sister Daniela Amodei, who was OpenAI's vice president of safety and policy. Several other OpenAI research alumni were also on Anthropic's founding team.

"This is definitely a big deal in the generative AI space," said Ali Javaheri, an associate research analyst at PitchBook. It "shows that OpenAI is not the only player in the game, that it's still a very competitive space," he said.
...

 
Nvidia's stock surged close to a $1 trillion market cap in extended trading Wednesday after it reported a shockingly strong forward outlook, and CEO Jensen Huang said the company was going to have a "giant record year."

Sales are up because of spiking demand for the graphics processors (GPUs) that Nvidia makes, which power artificial intelligence applications like those at Google, Microsoft and OpenAI.

Demand for AI chips in data centers spurred Nvidia to guide for $11 billion in sales during the current quarter, blowing away analyst estimates of $7.15 billion.

"The flashpoint was generative AI," Huang said in an interview with CNBC. "We know that CPU scaling has slowed, we know that accelerated computing is the path forward, and then the killer app showed up."
...

 
Is a chatbot being used as a union-busting scab? Dear god...............how low can the eating disorder people go!

Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization

Executives at the National Eating Disorders Association (NEDA) decided to replace hotline workers with a chatbot named Tessa four days after the workers unionized.

More:

 
Can ChatGPT identify its own work?

Does ChatGPT keep a record of everything it writes?



Professor gives zeros to most of the class on an assignment because ChatGPT says it did the work.


 
In comments to the National Telecommunications and Information Administration, EPIC commended the agency’s inquiry into AI accountability measures such as audits and algorithmic impact assessments. ...


From their comment:
The Electronic Privacy Information Center (EPIC) submits these comments in response to the National Telecommunications and Information Administration (NTIA)'s recent request for information regarding artificial intelligence (AI) system accountability. The NTIA is soliciting comments that, together with information collected from public engagements, will be used “to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.”

It is a critical moment for the federal government to espouse robust policies and practices concerning algorithmic audits, impact assessments, and other safeguards on AI systems. EPIC commends the NTIA for its interest in this topic and urges the agency to promulgate clear guidance that can be used by a wide range of policymakers and regulators seeking to establish legal safeguards on the use and development of AI.
...
Section I of these comments highlights previous recommendations by EPIC and other entities concerning AI accountability, which together should guide the NTIA’s inquiry and report. Section II answers some of the specific questions posed by the NTIA in its request for comment.
...

More (long):

 
Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) want to strangle generative artificial intelligence (A.I.) infants like ChatGPT and Bard in their cribs. How? By stripping them of the protection of Section 230 of the 1996 Communications Decency Act, which reads, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
...
Does Section 230 shield new developing A.I. services like ChatGPT from civil lawsuits in much the same way that it has protected other online services? Jess Miers, legal advocacy counsel at the tech trade group the Chamber of Progress, makes a persuasive case that it does. Over at Techdirt, she notes that ChatGPT qualifies as an interactive computer service and is not a publisher or speaker. "Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user)."
...
Evidently, Hawley and Blumenthal agree with Miers' analysis and recognize that Section 230 does currently shield the new A.I. services from civil lawsuits. Otherwise, why would the two senators bother introducing a bill that would explicitly amend Section 230 by adding a clause that "strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI"?
...


Hawley and Blumenthal are misguided on their legislation IMO. People are responsible for their own thoughts/decisions and anyone surrendering their own agency to an AI should not get coddled by the law.
 
An issue with AI I've been thinking about is that it can lie, presenting false or incomplete information in ways that seem intentional. With that ability come questions of liability and culpability. If an AI convinces or coerces someone into trying to fly off a tall building and they die, the AI isn't really subject to punishment or rehabilitation. You can't put an AI in jail (and would it even matter if you did?), and you can't really put the inventor of the AI in jail either. And what happens when an AI-directed robot murders someone?
 
Anyone who uses Snapchat now has free access to My AI, the app’s built-in artificial intelligence chatbot, first released as a paid feature in February.

In addition to serving as a chat companion, the bot can also have some practical purposes, such as offering gift-buying advice, planning trips, suggesting recipes and answering trivia questions, according to Snap.

However, while it’s not billed as a source of medical advice, some teens have turned to My AI for mental health support — something many medical experts caution against.



From what I've seen of ChatGPT, it will make them all become gay or <redacted - see forum guidelines on epithets>ies. Damn program is more woke than that black Raggedy Andy doing the White House briefings.
 
From what I've seen of ChatGPT, it will make them all become gay or <redacted - see forum guidelines on epithets>ies.

From what I've seen it can only say (post) stuff that's been programmed into it. I tried to get it to say some crazy shit about a couple of peeps by asking it leading questions but it didn't play. I wasn't serious.....just wanted to have a laugh. Didn't work.

I've asked it a lot of questions about weightlifters, wrestlers & strongmen. Does come up with some good stuff but has left out a few things.

My take.............it's fun to play with and could possibly be a useful tool for research. As for it being harmful or dangerous............don't see it at all.

On the flip side..........if you have a maniac programming robots to kill - that's a whole new ball game.
 
An issue with AI I've been thinking about is that it can lie, presenting false or incomplete information in ways that seem intentional. With that ability come questions of liability and culpability. If an AI convinces or coerces someone into trying to fly off a tall building and they die, the AI isn't really subject to punishment or rehabilitation. You can't put an AI in jail (and would it even matter if you did?), and you can't really put the inventor of the AI in jail either. And what happens when an AI-directed robot murders someone?

AI can't lie unless developers program it to do so. Regardless, anyone relying upon AI answers for anything is responsible for their own due diligence.

Anyone who puts an AI directly in control of machines that could harm folks would be culpable for negligence. Not too much different from Tesla and their failed experiment(s) with self-driving cars.
 
You can't put an AI in jail
No, but it could certainly be unplugged.


From what I've seen it can only say (post) stuff that's been programmed into it. I tried to get it to say some crazy shit about a couple of peeps by asking it leading questions but it didn't play. I wasn't serious.....just wanted to have a laugh. Didn't work.
Have you considered the possibility that it gave you the correct answer, but that your own biases prevented you from seeing it as such?
 