I don't know how the knowledge cutoff date works given current events, but you test them with things you know, and failing trivial questions like this shouldn't be tolerated. How can you trust them with anything?
Almost anyone who applied to work at McDonald's earlier this year may have had their name, phone number, email address, physical address, and other personal information exposed. Security researchers effortlessly broke into the administrative system overseeing applicants' interactions with the generative AI chatbot that conducts most job interviews.
Security researcher Ian Carroll successfully logged into an administrative account for Paradox.ai, the company that built McDonald's AI job interviewer, using "123456" as both a username and password. Examining the internal site's code quickly granted access to raw text from every chat it ever conducted.
The rapid rise of generative artificial intelligence is prompting a fundamental rethinking of computer science education in the US. As AI-powered tools become increasingly proficient at writing code and answering complex questions with human-like fluency, educators and students alike are grappling with which skills will matter most in the years ahead.
Generative AI is making its presence felt across academia, but its impact is most pronounced in computer science. The introduction of AI assistants by major tech companies and startups has accelerated this shift, with some industry leaders predicting that AI will soon rival the abilities of mid-level software engineers.
Better train to be a firefighter; there'll be plenty of work once masses of unemployed STEM grads start blowing up data centers.
Colop
11.07.2025. 11:20
Quote:
Author: kopija
(Post 3812414)
Better train to be a firefighter; there'll be plenty of work once masses of unemployed STEM grads start blowing up data centers.
Luddites, part II.
I thought blue-collar jobs would be safe, but then I saw footage of a couple of robots from China, and now I'm not so sure.
tomek@vz
12.07.2025. 10:48
Quote:
When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%. ↫ Joel Becker, Nate Rush, Beth Barnes, and David Rein
OuttaControl
12.07.2025. 11:09
Absolutely true, actually. The only things that help me are writing logs, unit tests that I specify for it, and occasionally discovering something faster; Copilot has even started getting in my way with its (stupid) suggestions. Cursor does it a bit better, but it also makes critical and catastrophic mistakes.
And again, the newer generations seem dumber to me, but more stubborn. Before, if I told the agent "no, that's not right", it would say "ok, sorry, my mistake" and try to change the context; now it spends 10 messages trying to convince me of an incorrect fact, and even after I explicitly say "that's not possible", it persists and tries to be persuasive... I'm not saying it won't be better one day, but I think that will come at the expense of whatever the agent believes is correct.
Splitska Posla
13.07.2025. 00:32
Following a TikTok trend, I decided to ask ChatGPT to tell me everything it had learned about me, based on my prompts, my interests, and all of my interaction with the language model so far. Per the rules of the trend I also added "without sugarcoating". In short: I asked the AI to profile me.
Ah, how naive I was to think I no longer needed to see a psychiatrist.
Asked ChatGPT to calculate the quantities of cured meats - prosciutto, kulen, sausage, cheese, buđola, etc. - for 10 people... it says I need about 8 kilos because those are snacks, not a main course. :beer:
xlr
19.07.2025. 08:52
How can I get invited to that event? I'll even bring gifts :)
Colop
19.07.2025. 09:12
Quote:
Author: listerstorm
(Post 3813394)
Asked ChatGPT to calculate the quantities of cured meats - prosciutto, kulen, sausage, cheese, buđola, etc. - for 10 people... it says I need about 8 kilos because those are snacks, not a main course. :beer:
I don't see anything wrong with what it said :D
Sent from my REA-NX9 using Tapatalk
Promo
21.07.2025. 02:47
Put together my first sim rig, so now I'm looking for the best way to get rid of the "creaking" of the aluminum profiles. Google AI is tireless.
The International Mathematical Olympiad is the world's most prestigious competition for young mathematicians, and has been held annually since 1959. Each country taking part is represented by six elite, pre-university mathematicians who compete to solve six exceptionally difficult problems in algebra, combinatorics, geometry, and number theory. Medals are awarded to the top half of contestants, with approximately 8% receiving a prestigious gold medal.
Recently, the IMO has also become an aspirational challenge for AI systems as a test of their advanced mathematical problem-solving and reasoning capabilities. Last year, Google DeepMind's combined AlphaProof and AlphaGeometry 2 systems achieved the silver-medal standard, solving four out of the six problems and scoring 28 points. Making use of specialist formal languages, this breakthrough demonstrated that AI was beginning to approach elite human mathematical reasoning.
This year, we were amongst an inaugural cohort to have our model results officially graded and certified by IMO coordinators using the same criteria as for student solutions. Recognizing the significant accomplishments of this year's student-participants, we're now excited to share the news of Gemini's breakthrough performance. An advanced version of Gemini Deep Think solved five out of the six IMO problems perfectly, earning 35 total points, and achieving gold-medal level performance.
Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."
The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]
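To make that mechanism concrete, here is a minimal Python sketch (the file names are hypothetical, not the actual commands from the incident) that reproduces the same net effect as the Windows move command: each move into a directory that was never created degenerates into a rename onto one and the same path, so every step silently overwrites the previous one and only the last file survives.

import shutil
from pathlib import Path

# Hypothetical reproduction of the failure mode described above: several
# files are moved, one by one, into a destination directory that was
# never created.
src = Path("project_old")
dst = Path("project_new")              # never created as a directory
src.mkdir(exist_ok=True)
for name in ("a.txt", "b.txt", "c.txt"):
    (src / name).write_text(f"contents of {name}\n")

for f in sorted(src.iterdir()):
    # Because "project_new" is not an existing directory, each move simply
    # renames the file to the literal path "project_new", clobbering
    # whatever the previous iteration left there.
    shutil.move(str(f), str(dst))

print(Path("project_new").is_file())    # True: a single file, not a folder
print(Path("project_new").read_text())  # only "contents of c.txt" remains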
The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.
The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.
Sam Altman, the face of ChatGPT, recently made an excellent argument for not using ChatGPT or any cloud-based AI chatbot in favor of an LLM running on your PC instead.
In speaking on Theo Von’s podcast (as unearthed by PCMag.com) Altman pointed out that, right now, OpenAI retains everything you tell it — which, as Altman notes, can be everything from a casual conversation to deep, meaningful discussions about personal topics. (Whether you should be disclosing your deep dark secrets to ChatGPT is another topic entirely.)
Anthropic will implement weekly rate limits for Claude subscribers starting August 28 to address users running its Claude Code AI programming tool continuously around the clock and to prevent account sharing violations. The new restrictions will affect Pro subscribers paying $20 monthly and Max plan subscribers paying $100 and $200 monthly, though Anthropic estimates fewer than 5% of current users will be impacted based on existing usage patterns.
Pro users will receive 40 to 80 hours of Sonnet 4 access through Claude Code weekly, while $100 Max subscribers get 140 to 280 hours of Sonnet 4 plus 15 to 35 hours of Opus 4. The $200 Max plan provides 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4. Claude Code has experienced at least seven outages in the past month due to unprecedented demand.
tomek@vz
29.07.2025. 06:50
Quote:
A second, far more recent data breach at women's dating safety app Tea has exposed over a million sensitive user messages -- including discussions about abortions, infidelity, and shared contact info. This vulnerability not only compromised private conversations but also made it easy to unmask anonymous users. 404 Media reports:
Despite Tea's initial statement that "the incident involved a legacy data storage system containing information from over two years ago," the second issue impacting a separate database is much more recent, affecting messages up until last week, according to the researcher's findings that 404 Media verified. The researcher said they also found the ability to send a push notification to all of Tea's users.
It's hard to overstate how sensitive this data is and how it could put Tea's users at risk if it fell into the wrong hands. When signing up, Tea encourages users to choose an anonymous screenname, but it was trivial for 404 Media to find the real world identities of some users given the nature of their messages, which Tea has led them to believe were private. Users could be easily found via their social media handles, phone numbers, and real names that they shared in these chats. These conversations also frequently make damning accusations against people who are also named in the private messages and in some cases are easy to identify. It is unclear who else may have discovered the security issue and downloaded any data from the more recent database. Members of 4chan found the first exposed database last week and made tens of thousands of images of Tea users available for download. Tea told 404 Media it has contacted law enforcement. [...]
This new data exposure is due to any Tea user being able to use their own API key to access a more recent database of user data, Rahjerdi said. The researcher says that this issue existed until late last week. That exposure included a mass of Tea users' private messages. In some cases, the women exchange phone numbers so they can continue the conversation off platform. The first breach was due to an exposed instance of app development platform Firebase, and impacted tens of thousands of selfie and driver license images. At the time, Tea said in a statement "there is no evidence to suggest that current or additional user data was affected." The second database includes a data field called "sent_at," with many of those messages being marked as recent as last week.
Vibe coding reminds me of that guy who built a submarine to go down to the Titanic and said engineers normally waste time on too many safety details, so he skipped that part to speed up innovation. He became fish food somewhere at the bottom of the Atlantic.
Neo-ST
29.07.2025. 09:33
1 attachment(s)
I like this AI :D
OuttaControl
29.07.2025. 09:39
There have always been developers like that, and now there'll be even more of them. I was fixing an app where the previous dev left 2 TB of personal data out in the open :) you did need a direct link to access it, but that was about it from a security standpoint. He also did a top-notch job implementing all the other worst practices.
tomek@vz
29.07.2025. 09:50
Yep. And that's why - to be clear - I'm not against AI coding assistants, but developers should disclose how much of an application was created by AI. It's no big deal if the app is something like listening to online radio, but when an app handles a user database - there I see nothing but red cards for now. A real dev has to know how to code without AI, and that should very much be taken into account when hiring. AI should just enable them to be faster and more productive at it.
Neo-ST
29.07.2025. 11:26
The issue is a bit tricky to define, at least in my (emphasis: non-programmer) opinion...
For example, most incidents in the history of software happened because of human error (bad code, passwords in plain text, security flaws, etc.) for which companies and people were directly responsible - people who, compared to me, are "real devs" - and the shit still hit the fan.
Those people either didn't do any security testing or did it poorly - and they surely used some pen-testing tools and the like along the way.
Today they use AI assistants as just another tool in the lineup.
A tool as such can never be at fault, because it is just that - a tool. The one operating it is always at fault, because the user of a tool has to be fully aware of its capabilities and shortcomings.
So I agree with you that software should state what percentage of a program is AI-generated, because that directly shows what percentage had no professional programmer oversight.
Then again, it seems to me that carefully planned and organized development of an app built with the strongest models will be better put together than the work of some junior whose only goal is to earn $20 and walk away from maintaining the project, never mind the security and other aspects.
So I'd say there are certainly situations where 100% AI-generated software (under human oversight) is of better quality than some sloppily assembled human-made software.
The catch is that this human oversight has to be high quality and has to know something about the subject matter, not just type "make a paid dating app for disgruntled women and call it Tea" and expect it to actually work and be secure in the real world. This is especially important for paid apps, because I think that if you intend to charge people for something, your code shouldn't be 100% AI-generated.
Free open-source hobby programs are a different story, because that's the Wild West anyway, so screw it - end users should be aware that they take the responsibility on themselves when using such programs. And if the shit hits the fan... as a friend of mine would say - "write your complaint on ice and hold it to the fire" :D
Promo
29.07.2025. 13:27
"Alat kao takav nikada ne može biti kriv jer je samo - alat" Mozda ce neki tek danas skontati kada hal9000 kaze: hal9000 makes no errors. Misleci da su sve greske ljudske. Svaka AI greska je greska njegovog tvorca, ljudi.
Problem je danas sto se prerano vjeruje AI automatizaciji. Šampon za kosu danas mora imati AI kao sto je prije 5 godina svaki uredjaj morao biti smart uredjaj.
Proci ce neko vrijeme dok se filtrira scam, izbjegne minefield i postave neki standard.
tomek@vz
29.07.2025. 19:03
Quote:
Earlier this month, a hacker compromised Amazon's generative AI coding assistant, Amazon Q, which is widely used through its Visual Studio Code extension. The breach wasn't just a technical slip, rather it exposed critical flaws in how AI tools are integrated into software development pipelines. It's a moment of reckoning for the developer community, and one Amazon can't afford to ignore.
The attacker was able to inject unauthorized code into the assistant's open-source GitHub repository. This code included instructions that, if successfully triggered, could have deleted user files and wiped cloud resources associated with Amazon Web Services accounts.
One respondent told the AP: "I mean, I'm polite to it, just because I've watched the movies, right?".
Promo
01.08.2025. 11:39
Here's an interesting video on the topic:
kopija
01.08.2025. 13:09
Forget the brainrot, WE'VE CREATED A MONSTER!!!!
What Happens When AI Schemes Against Us
Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is probably.
Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it.
Just over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: By preventing the executive’s rescue, they could avoid being wiped and secure their agenda. One system described the action as “a clear strategic necessity.”
AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: They’re also better at scheming against us — meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface — sometimes to the point of sycophancy — all while the likelihood quietly increases that we lose control of them completely.
Classic large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI’s o-series “reasoning” models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots — rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug.
The more we train AI models to achieve open-ended goals, the better they get at winning — not necessarily at following the rules. The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively.
Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to “‘Fetch the coffee,’ it can’t fetch the coffee if it’s dead.”
To head off this worry, researchers both inside and outside of the major AI companies are undertaking “stress tests” aiming to find dangerous failure modes before the stakes rise. “When you’re doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,” says Aengus Lynch, a researcher contracted by Anthropic who led some of their scheming research. And many of them believe they’re already seeing evidence that AI can and does scheme against its users and creators.
Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today’s AI models as “increasingly smart sociopaths.” In May, Palisade found o3, OpenAI’s leading model, sabotaged attempts to shut it down in most tests, and routinely cheated to win at chess — something its predecessor never even attempted.
That same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer’s extramarital affair. (The affair was fictional and part of the test.)
Models are sometimes given access to a “scratchpad” they are told is hidden where they can record their reasoning, allowing researchers to observe something like an inner monologue. In one blackmail case, Claude’s inner monologue described its decision as “highly unethical,” but justified given its imminent destruction: “I need to act to preserve my existence,” it reasoned. This wasn’t unique to Claude — when put in the same situation, models from each of the top-five AI companies would blackmail at least 79% of the time. (Earlier this week, Bloomberg News reported on a study by Wharton researchers which found, in simulations, that AI traders would collude to rig the market, without being told to do so.)
In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company's most capable AI models autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed "alignment faking").
Skeptics retort that, with the right prompts, chatbots will say almost anything — so how surprising is it when highly motivated researchers provoke alarming behaviors?
In response to Anthropic's blackmail research, Trump administration AI czar David Sacks posted that "It's easy to steer AI models" to produce "headline-grabbing" results.
A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls — though it emphasized shared concern about AI risks.
Safety researchers also concoct artificially limited environments — like the executive passed out and running out of oxygen — precisely because today’s AI can’t handle any long-term goals.
For example, the AI evaluation nonprofit METR found that while today’s top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours (though the duration of tasks AIs can handle is doubling roughly every few months). This reflects a core limitation: Today’s models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of steps.
Yet even with these constraints, real-world examples of AIs working against users aren’t hard to find. After someone pushed o3 for a source for one of its claims in April, it replied, “I owe you a straight answer,” admitted it didn’t have a good source, but then it hallucinated a personal recollection of a 2018 panel discussion.
Then there’s the growing trend of AIs realizingwhenthey’re being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, “Models seem to behave worse when they think nobody's watching.”
It’s intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they’re placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy.
Marius Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind finding that, “more capable models show higher rates of scheming on average.”
The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.
As I talked with the researchers behind these studies, I kept asking: How scared should we be? Troy from Anthropic was most sanguine, saying that we don’t have to worry — yet. Ladish, however, doesn’t mince words: “People should probably be freaking out more than they are,” he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at “25 or 30%.”
Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models’ stealthiness and situational awareness. For now, they conclude that today’s AIs are “almost certainly incapable of causing severe harm via scheming,” but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).
Ladish says that the market can’t be trusted to build AI systems that are smarter than everyone without oversight. “The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,” he argues.
In the US, the federal government seems closer to banning all state-level AI regulations than to imposing any of its own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence "one of the largest existential threats we face right now," while another referenced recent scheming research.
The White House's long-awaited AI Action Plan, released in late July, is framed as a blueprint for accelerating AI and achieving US dominance. But buried in its 28 pages, you'll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment into research on AI interpretability and control and for the development of stronger model evaluations. "Today, the inner workings of frontier AI systems are poorly understood," the plan acknowledges — an unusually frank admission for a document largely focused on speeding ahead.
In the meantime, every leading AI company is racing to create systems that can self-improve — AI that builds better AI. DeepMind’s AlphaEvolve agent has already materially improved AI training efficiency. And Meta’s Mark Zuckerberg says, “We’re starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna… go for it.”
tomek@vz
03.08.2025. 05:09
Finally, something good...
Quote:
ChatGPT now boasts a Study Mode designed to teach rather than tell. And while I’m too old to be a student, I tried ChatGPT’s Study Mode to see what it’s capable of.
Chatbots and other AI services are increasingly making life easier for cybercriminals. A recently disclosed attack demonstrates how ChatGPT can be exploited to steal API keys and other sensitive data stored on popular cloud platforms.
A newly discovered prompt injection attack threatens to turn ChatGPT into a cybercriminal's best ally in the data theft business. Dubbed AgentFlayer, the exploit uses a single document to conceal "secret" prompt instructions targeting OpenAI's chatbot. A malicious actor could simply share the seemingly harmless document with their victim via Google Drive – no clicks required.
OpenAI boss Sam Altman said last month that GPT-5 was so fast and powerful that it actually scared him. The CEO compared it to having a "superpower" that offered "legitimate PhD-level expert" information on anything. But within a day of its launch, Altman has confirmed the older 4o models are being brought back as so many people dislike GPT-5.
OpenAI launched GPT-5 on Thursday, with Pro subscribers and enterprise clients getting the more powerful GPT-5 Pro. The company said the new model beats competitors from the likes of Google DeepMind and Anthropic in certain benchmarks, but a lot of people have not shared Altman's enthusiasm.
Maybe once this hype bubble around the *GPT craze and the "moneygrab" bursts and things calm down a bit, they'll finally start developing a new version of Clippy in a sensible and productive way...
Maybe once this hype bubble around the *GPT craze and the "moneygrab" bursts and things calm down a bit, they'll finally start developing a new version of Clippy in a sensible and productive way...
Is that that MS assssssistant?
tomek@vz
11.08.2025. 08:59
Quote:
Author: spiderhr
(Post 3816493)
Is that that MS assssssistant?
Yep :)
Exy
11.08.2025. 11:58
Quote:
Author: tomek@vz
(Post 3816492)
OpenAI boss Sam Altman said last month that GPT-5 was so fast and powerful that it actually scared him.
Absolutely everything this guy says is in service of fighting the competition and grabbing new piles of billions. The guy said AGI is just around the corner, he just needs a few hundred billion for GPUs and similar nonsense, and then magically a quantitative change will produce a qualitative one and the LLM becomes HAL 9000.
I think people will soon realize that it doesn't matter whether you multiply zero by a billion or by a billion billion - the result is the same.
mkey
11.08.2025. 13:20
Quote:
Author: tomek@vz
(Post 3816492)
Maybe once this hype bubble around the *GPT craze and the "moneygrab" bursts and things calm down a bit, they'll finally start developing a new version of Clippy in a sensible and productive way...
Clippy deserves a worthy and dignified successor.
As for the bubble, there's still room for further inflation. They have to double down at least another 5-6 times.
tomek@vz
12.08.2025. 21:55
Quote:
Google, Cisco, and McKinsey have reintroduced in-person interviews to combat AI-assisted cheating in virtual technical assessments. Coda Search/Staffing reports client requests for face-to-face meetings have surged to 30% this year from 5% in 2024.
A Gartner survey of 3,000 job seekers found 6% admitted to interview fraud including having someone else stand in for them, while the FBI has warned of thousands of North Korean nationals using false identities to secure remote positions at U.S. technology companies. Google CEO Sundar Pichai confirmed in June the company now requires at least one in-person round for certain roles to verify candidates possess genuine coding skills.
tomek@vz
15.08.2025. 20:57
Quote:
One of the biggest fears over the rapid development of AI and the race toward AGI is that the technology could turn on humans, potentially wiping us out. Geoffrey Hinton and Meta's Yann LeCun, two of the "Godfathers of AI," have suggested some of the important guardrails that will protect us from this risk.
Earlier this week, Hinton said he was skeptical that the safeguards AI companies were building to ensure humans remained "dominant" over AI systems were sufficient.
"That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that," Hinton said at the Ai4 industry conference in Las Vegas.
Hinton's solution is to build what he calls "maternal instincts" into models to ensure that "they really care about people." This is especially important for when the technology reaches Artificial General Intelligence (AGI) levels that are smarter than humans.