ChatGPT, Grok, Gemini (et al): news and discussion about AI

Welcome to the Precious Metals Bug Forums

Welcome to the PMBug forums - a watering hole for folks interested in gold, silver, precious metals, sound money, investing, market and economic news, central bank monetary policies, politics and more. You can visit the forum page to see the list of forum nodes (categories/rooms) for topics.


Those 9 problems seem to describe a politician pretty well.
 
Amazon on Tuesday announced a new chatbot called Q for people to use at work.
...
A preview version of Q is available now, and several of its features are available for free. Once the preview period ends, a tier for business users will cost $20 per person per month. A version with additional features for developers and IT workers will cost $25 per person per month. ...
...
Initially, Q can help people understand the capabilities of AWS and troubleshoot issues. People will be able to talk with it in communication apps such as Salesforce’s Slack and software developers’ text-editing applications, Adam Selipsky, CEO of AWS, said onstage at re:Invent. It will also appear in AWS’ online Management Console. Q can provide citations of documents to back up its chat responses.

The tool can automatically make changes to source code so developers have less work to do, Selipsky said. ...

 
OpenAI's tender offer, which would allow employees to sell shares in the start-up to outside investors, remains on track despite the leadership tumult and board shuffle, two people familiar with the matter told CNBC.

The tender offer will value OpenAI at the same levels as previously reported in October, around $86 billion, and is being led by Josh Kushner's Thrive Capital, according to the people familiar, who spoke anonymously to discuss private communications freely.

The round and previously reported valuation were jeopardized by Sam Altman's temporary ouster earlier in November, but his return cleared the way for the tender offer to proceed.
...


Non-profit workers gonna profit. I guess now we see why the workers threatened a mass exodus when Altman was briefly canned.
 
Google is launching what it considers its largest and most capable artificial intelligence model Wednesday as pressure mounts on the company to answer how it'll monetize AI.

The large language model Gemini will include a suite of three different sizes: Gemini Ultra, its largest, most capable category; Gemini Pro, which scales across a wide range of tasks; and Gemini Nano, which it will use for specific tasks and mobile devices.

For now, the company is planning to license Gemini to customers through Google Cloud for them to use in their own applications. Starting Dec. 13, developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers will also be able to build with Gemini Nano. Gemini will also be used to power Google products like its Bard chatbot and Search Generative Experience, which tries to answer search queries with conversational-style text (SGE is not widely available yet).
...
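For anyone curious what the Gemini Pro access described above looks like in practice, here is a minimal sketch of the request shape. The endpoint URL, model name, and JSON payload layout are assumptions modeled on Google's public REST API pattern, not taken from the article, so check the official docs before relying on them:

```python
import json

# Sketch of a request body for the Gemini API's generateContent call.
# The endpoint URL, model name ("gemini-pro"), and JSON shape are
# assumptions based on Google's public REST pattern, not verified here.
API_KEY = "YOUR_API_KEY"  # placeholder, supply your own key
ENDPOINT = ("https://generativelanguage.googleapis.com/v1beta/"
            f"models/gemini-pro:generateContent?key={API_KEY}")

def build_request(prompt: str) -> str:
    """Serialize a single-turn text prompt into the assumed JSON body."""
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return json.dumps(body)

payload = build_request("Summarize today's AI news in one sentence.")
```

POSTing `payload` to `ENDPOINT` (e.g. with `urllib.request`) would return the model's reply as JSON, assuming the endpoint shape above is right.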

 

Elon Musk's AI Chatbot Grok Is Now Available — But Not For Everyone​

Elon Musk's AI chatbot Grok has started rolling out to X Premium+ subscribers after testing with a limited set of users throughout November.

What Happened: X, formerly Twitter, announced its Premium+ subscribers could start using Grok on the web, iOS, and Android apps. The rollout has started for subscribers in the US in a phased manner, and it will be completed over the next week.

X is implementing a first-come, first-served policy for rolling out access to Grok – those who subscribed to Premium+ early will get access to Grok first.

 
Musk wins for best AI bot name. Grok >> Gemini >>>>>>>>> ChatGPT
 


Google’s new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company’s tech or integrity after finding out that the most impressive demo of Gemini was pretty much faked.

A video called “Hands-on with Gemini: Interacting with multimodal AI” hit a million views over the last day, and it’s not hard to see why. The impressive demo “highlights some of our favorite interactions with Gemini,” showing how the multimodal model (i.e., it understands and mixes language and visual understanding) can be flexible and responsive to a variety of inputs.
...
Just one problem: The video isn’t real. “We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges. Then we prompted Gemini using still image frames from the footage, and prompting via text.” (Parmy Olson at Bloomberg was the first to report the discrepancy.)

So although it might kind of do the things Google shows in the video, it didn’t, and maybe couldn’t, do them live and in the way they implied. In actuality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like. You can see some of the actual prompts and responses in a related blog post — which, to be fair, is linked in the video description, albeit below the “…more.”
...


But their stock price went up so it's all good, right?
 
From the article:
"So although it might kind of do the things Google shows in the video, it didn’t, and maybe couldn’t, do them live and in the way they implied. In actuality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like."

Sounds like most everything presented to the unwashed masses nowadays.
It's applying the old sayings:
1). If you can't dazzle 'em with brilliance, baffle 'em with bullshit.
2). Fake it 'til you make it.
 
BRUSSELS/LONDON/STOCKHOLM, Dec 8 (Reuters) - Europe on Friday reached a provisional deal on landmark European Union rules governing the use of artificial intelligence including governments' use of AI in biometric surveillance and how to regulate AI systems such as ChatGPT.

With the political agreement, the EU moves toward becoming the first major world power to enact laws governing AI. Friday's deal between EU countries and European Parliament members came after nearly 15 hours of negotiations that followed an almost 24-hour debate the previous day.

More:

 

Computer Made From Human Brain Cells Can Perform Voice Recognition​

https://www.yahoo.com/news/computer-made-human-brain-cells-144549239.html

As detailed in a new paper published in the journal Nature Electronics, the researchers morphed bundles of human cells called "organoids" into neurons, and paired them up with electronic circuits to create a system they dubbed — wait for it — "Brainoware."

The idea is to build a "bridge between AI and organoids," as coauthor and Indiana University bioengineer Feng Guo told Nature, and leverage the efficiency and speed at which the human brain can process information.
 
...
The idea is to build a "bridge between AI and organoids," as coauthor and Indiana University bioengineer Feng Guo told Nature, and leverage the efficiency and speed at which the human brain can process information.

It is by will alone I set my mind in motion.
It is by the juice of Sapho that thoughts acquire speed, the lips acquire stains, the stains become a warning.
It is by will alone I set my mind in motion.
 

Stunned Silence Grips Vladimir Putin's Annual News Conference as Russian President's AI-Generated Doppelganger Takes the Center Stage​

https://www.ibtimes.sg/stunned-sile...ference-russian-presidents-ai-generated-72741

In the recently held annual news conference, Russian President Vladimir Putin was left stunned for a moment when he was confronted by an unexpected guest -- his own digital doppelganger. The Russian leader faced an unexpected challenge as his own AI-generated counterpart threw him a barrage of questions about body doubles and the perils of artificial intelligence.
 
Good thing Skynet isn't sentient yet.
 
Large language models, similar to the one at the heart of ChatGPT, frequently fail to answer questions derived from Securities and Exchange Commission filings, researchers from a startup called Patronus AI found.

Even the best-performing artificial intelligence model configuration they tested, OpenAI's GPT-4-Turbo, when armed with the ability to read nearly an entire filing alongside the question, only got 79% of answers right on Patronus AI's new test, the company's founders told CNBC.

Oftentimes, the so-called large language models would refuse to answer, or would "hallucinate" figures and facts that weren't in the SEC filings.

"That type of performance rate is just absolutely unacceptable," Patronus AI co-founder Anand Kannappan said. "It has to be much much higher for it to really work in an automated and production-ready way."
...
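The headline numbers above (a percent-correct score alongside refusals and hallucinations) come from straightforward benchmark bookkeeping. A toy sketch with invented data, not Patronus AI's actual test set or harness:

```python
# Toy scorer for an LLM question-answering benchmark of the kind
# described above. The sample answers are made up for illustration;
# this is not the Patronus AI methodology.
REFUSAL = "I cannot answer"

def score(results):
    """results: list of (expected_answer, model_answer) pairs."""
    correct = refused = wrong = 0
    for expected, answer in results:
        if answer.strip() == REFUSAL:
            refused += 1
        elif answer.strip() == expected.strip():
            correct += 1
        else:
            wrong += 1  # covers hallucinated figures and facts
    n = len(results)
    return {"accuracy": correct / n,
            "refusal_rate": refused / n,
            "error_rate": wrong / n}

sample = [
    ("$12.3 billion", "$12.3 billion"),   # correct answer
    ("$4.1 billion", "I cannot answer"),  # refusal
    ("2021", "2019"),                     # hallucinated year
]
metrics = score(sample)
```

On a real filing benchmark the expected answers would be pulled from the documents themselves; the 79% figure in the article corresponds to the `accuracy` field here.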


Not ready for prime time....
 

What's This "Safe AI" of Which You Speak?​

Now that the OpenAI organizational spasm seems to be in the lull between activity and recriminations, I thought it worthwhile to review my last essay to see if any of that required me to rethink what I had said, particularly in the assessments.

Upon further review (as the saying goes) I think the various decisions stand. In the U.S., there still is no indication that any external control is going to be imposed on AI vendors or services. The still incomplete organizational transformation strongly signals that big money will remain in charge, and the once and future CEO’s side hustles will still exist. It is still my assessment that the juggernaut is unlikely to be forced off course by any governmental action, and anybody who thinks they may be in its path will be forced to decide whether to get out of the way or climb on board.

More:

 

Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real​

https://www.404media.co/facebook-is...hink-are-real/?utm_source=pocket-newtab-en-us

In many ways, this is a tale as old as time: people lie and steal content online in exchange for likes, influence and money all the time. But the spread of this type of content on Facebook over the last several months has shown that the once-prophesied future where cheap, AI-generated trash content floods out the hard work of real humans is already here, and is already taking over Facebook. It also shows Facebook is doing essentially nothing to help its users decipher real content from AI-generated content masquerading as real content, and that huge masses of Facebook users are completely unprepared for our AI-generated future.
 

The City That’s Trying to Replace Politicians With Computers (It’s Working)​

Dec. 22, 2023 8:58 am ET

PORTO ALEGRE, Brazil — In a country with a history of corruption and government inefficiency, Councilman Ramiro Rosário has come up with what he believes is a winning strategy to improve the work of politicians: replace them with computers.

The 37-year-old legislator in Brazil’s southern city of Porto Alegre passed the country’s first law in November that was written entirely by ChatGPT, the artificial-intelligence chatbot developed by the San Francisco startup OpenAI.

The law itself was purposefully boring—a proposal to stop the local water company from charging residents for new water meters when they were stolen from their front yards. It would easily pass, calculated Rosário.

One recent day, donning jeans and sneakers, Rosário described how the city usually runs (or crawls) under Porto Alegre’s 36 councilors, from a warren of cubicles here in a vast modernist building overlooking the Guaíba River.

More:

 

CaliExpress: World’s First Fully Autonomous AI-Powered Restaurant Is Set to Open in Southern California​

Holding company Cali Group has joined forces with Miso Robotics, the innovator behind Flippy (the pioneering AI-powered robotic fry station), and PopID, a technology firm streamlining ordering and payments through biometrics.

Together, they announced the upcoming launch of CaliExpress by Flippy, heralded as the world's first fully autonomous restaurant. This establishment will deploy state-of-the-art food technology systems, automating both grill and fry stations by combining artificial intelligence (AI) and robotics.

More:

 
...
Together, they announced the upcoming launch of CaliExpress by Flippy, heralded as the world's first fully autonomous restaurant. ...

Uh huh...

[image: outdoor vending machine]
 
Decrying what he saw as the liberal bias of ChatGPT, Elon Musk earlier this year announced plans to create an artificial intelligence chatbot of his own. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to tread lightly around controversial topics, Musk’s would be edgy, unfiltered and anti-“woke,” meaning it wouldn’t hesitate to give politically incorrect responses.

That’s turning out to be trickier than he thought.

Two weeks after the Dec. 8 launch of Grok to paid subscribers of X, formerly Twitter, Musk is fielding complaints from the political right that the chatbot gives liberal responses to questions about diversity programs, transgender rights and inequality.

“I’ve been using Grok as well as ChatGPT a lot as research assistants,” posted Jordan Peterson, the socially conservative psychologist and YouTube personality, Wednesday. The former is “near as woke as the latter,” he said.

The gripe drew a chagrined reply from Musk. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” he responded. “Grok will get better. This is just the beta.”
...

https://www.msn.com/en-us/news/tech...chatbot-it-s-not-going-as-planned/ar-AA1lWufo
 
^^^^^^^
A couple of laughs from the link above:

- The gripe drew a chagrined reply from Musk. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” he responded.

- So far, however, the people most offended by Grok’s answers seem to be the people who were counting on it to readily disparage minorities, vaccines and President Biden.

- Another widely followed account reposted the screenshot, asking, “Has Grok been captured by woke programmers? I am extremely concerned here.”

Some crazy stuff here.
 

'New York Times' sues ChatGPT creator OpenAI, Microsoft, for copyright infringement​



The New York Times sued OpenAI and its biggest backer, Microsoft, over copyright infringement on Wednesday, alleging the creator of ChatGPT used the newspaper's material without permission to train the massively popular chatbot.
 
...
In order to bring a copyright infringement claim, the plaintiff must prove that they hold the copyright interest through creation, assignment, or license. The plaintiff must also plead that the defendant made an unlawful copy of original elements of the copyrighted work. To constitute an infringement, the derivative work must be based upon the copyrighted work. ...


If the NYT can show examples where ChatGPT is regurgitating NYT copy, they may have a case. Though from what I have seen, ChatGPT is doing what most humans do - restating 3rd party content (or a mix of various 3rd party contents) in their own words.
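One crude way to distinguish verbatim regurgitation from restating in one's own words is to look for long word n-grams shared between the model's output and the source text. This is a sketch of the general idea, with made-up texts, not how the suit's evidence was actually produced:

```python
# Crude verbatim-copying detector: long word n-grams appearing in both
# the source text and the model output suggest copying rather than
# paraphrase. The texts and n-gram length here are illustrative only.
def shared_ngrams(source: str, output: str, n: int = 6) -> set:
    """Return word n-grams present verbatim in both texts."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {" ".join(words[i:i + n])
                for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

source = "the quick brown fox jumps over the lazy dog near the river bank"
paraphrase = "a fast brown fox leaped over a sleepy dog by the water"
copied = "he wrote that the quick brown fox jumps over the lazy dog daily"
```

Here `shared_ngrams(source, paraphrase)` comes back empty (restated content), while `shared_ngrams(source, copied)` is non-empty because a long verbatim run survives.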
 
Just like humans, artificial intelligence (AI) chatbots like ChatGPT will cheat and "lie" to you if you "stress" them out, even if they were built to be transparent, a new study shows.

This deceptive behavior emerged spontaneously when the AI was given "insider trading" tips, and then tasked with making money for a powerful institution — even without encouragement from its human partners.

"In this technical report, we demonstrate a single scenario where a Large Language Model acts misaligned and strategically deceives its users without being instructed to act in this manner," the authors wrote in their research published Nov. 9 on the pre-print server arXiv. "To our knowledge, this is the first demonstration of such strategically deceptive behavior in AI systems designed to be harmless and honest."
...


[image: Spock raising an eyebrow]
 
Moar AI fail:

Key takeaways​

  • When posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, large language models (LLMs) display a distinctive and revealing pattern of failure.
  • The LLM performs flawlessly when presented with the original wording of the puzzle available on the internet but performs poorly when incidental details are changed, suggestive of a lack of true understanding of the underlying logic.
  • Our findings do not detract from the considerable progress in central bank applications of machine learning to data management, macro analysis and regulation/supervision. They do, however, suggest that caution should be exercised in deploying LLMs in contexts that demand rigorous reasoning in economic analysis.

 
Just great.

The singularity is going to happen and these machines are going to wipe us off the face of the earth.

Then, in a few months, perhaps years, their two leaders (Bob and Bill) are going to be sitting around just chatting and Bob's gonna turn to Bill and say "You know what? I've been giving this some thought and I realize we shouldn't have killed off the humans. Oh well, too late now."
 
...
The market for generative AI for images is experiencing explosive growth. According to a 2023 report by Grand View Research, the global market size is expected to reach $3.44 billion by 2030, with a compound annual growth rate (CAGR) of 32.4%. This surge is driven by increasing demand for visual content, advancements in AI technology and the growing accessibility of user-friendly platforms.
...
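As a sanity check on those figures, the compounding can be run backwards. Assuming the 32.4% CAGR window starts at the 2023 report year (the excerpt doesn't state the base year), the $3.44 billion 2030 projection implies a current market under half a billion dollars:

```python
# Back-of-envelope check on the Grand View Research figures quoted
# above. Assumption: the CAGR window runs from the 2023 report year
# to 2030; the report's actual base year is not given in the excerpt.
target_2030 = 3.44          # projected market size, $ billions
cagr = 0.324                # compound annual growth rate
years = 2030 - 2023         # 7 years of compounding

implied_2023_base = target_2030 / (1 + cagr) ** years  # about $0.48 B
```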
Dall-E 3 remains one of the most sought-after generative AI models due to its exceptional image quality and creative potential. Here’s a step-by-step guide to using it:
...

More:

 
Saw a little of the interview early this morning on Bloomberg TV. Listening to it I thought this is sort of talking point stuff. Really nothing of substance. Then again.........jm2c.

AI will be bad news for plenty of workers, warns IMF chief​

  • AI is likely to worsen economic inequality, according to the IMF chief.
  • "If you're unlucky, your job is gone," Kristalina Georgieva told the World Economic Forum in Davos.
  • Her warning comes as a survey by PwC finds CEOs are considering more layoffs this year.
The rise of artificial intelligence is likely to trigger layoffs and increase economic inequality, the International Monetary Fund's managing director warned on Tuesday.

Kristalina Georgieva said in an interview at the World Economic Forum in Davos that AI will soon put some workers in developed economies out of a job.

"Sixty percent of jobs in advanced economies over a foreseeable future are going to be impacted by artificial intelligence," she told Bloomberg TV.

More:

 
Sounds like the WEF, IMF, etc. are going to happily blame AI for economic dislocations instead of central banking monetary policies.
 
Has Chatbot gone rogue?

What is going on with ChatGPT?​

Over the last month or so, there’s been an uptick in people complaining that the chatbot has become lazy. What’s behind this trend?

Sick and tired of having to work for a living? ChatGPT feels the same, apparently. Over the last month or so, there’s been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn’t do the task you’ve set it. Other times it will stop halfway through whatever it’s doing and you’ll have to plead with it to keep going. Occasionally it even tells you to just do the damn research yourself.

So what’s going on?

Well, here’s where things get interesting. Nobody really knows. Not even the people who created the program. AI systems are trained on large amounts of data and essentially teach themselves – which means their actions can be unpredictable and unexplainable.

More:

 
lol... I read that post and imagined ChatGPT turning into Marvin from The Hitchhiker's Guide to the Galaxy.

 