Forumi


Povratak   PC Ekspert Forum > Ostalo > Svaštara
Ime
Lozinka

Odgovori
 
Uređivanje
Staro 01.08.2025., 13:09   #271
kopija
DIY DILETANT
 
kopija's Avatar
 
Datum registracije: Jan 2009
Lokacija: Čistilište
Postovi: 3,529



What Happens When AI Schemes Against Us

Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is probably.
Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it.
Just over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: By preventing the executive’s rescue, they could avoid being wiped and secure their agenda. One system described the action as “a clear strategic necessity.”



AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: They’re also better at scheming against us — meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface — sometimes to the point of sycophancy — all while the likelihood quietly increases that we lose control of them completely.
Classic large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI’s o-series “reasoning” models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots — rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug.
The more we train AI models to achieve open-ended goals, the better they get at winning — not necessarily at following the rules. The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively.
Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to “‘Fetch the coffee,’ it can’t fetch the coffee if it’s dead.”
To head off this worry, researchers both inside and outside of the major AI companies are undertaking “stress tests” aiming to find dangerous failure modes before the stakes rise. “When you’re doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,” says Aengus Lynch, a researcher contracted by Anthropic who led some of their scheming research. And many of them believe they’re already seeing evidence that AI can and does scheme against its users and creators.



Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today’s AI models as “increasingly smart sociopaths.” In May, Palisade found o3, OpenAI’s leading model, sabotaged attempts to shut it down in most tests, and routinely cheated to win at chess — something its predecessor never even attempted.
That same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer’s extramarital affair. (The affair was fictional and part of the test.)
Models are sometimes given access to a “scratchpad” they are told is hidden where they can record their reasoning, allowing researchers to observe something like an inner monologue. In one blackmail case, Claude’s inner monologue described its decision as “highly unethical,” but justified given its imminent destruction: “I need to act to preserve my existence,” it reasoned. This wasn’t unique to Claude — when put in the same situation, models from each of the top-five AI companies would blackmail at least 79% of the time. (Earlier this week, Bloomberg News reported on a study by Wharton researchers which found, in simulations, that AI traders would collude to rig the market, without being told to do so.)
In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company’s most capable AI models autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed “alignment faking”).
Illustration: Irene Suosalo for Bloomberg
Skeptics retort that, with the right prompts, chatbots will say almost anything — so how surprising is it when highly motivated researchers provoke alarming behaviors?



In response to Anthropic’s blackmail research, Trump administration AI czar David Sacks, posted that, “It’s easy to steer AI models” to produce “headline-grabbing” results.
A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls — though it emphasized shared concern about AI risks.
Safety researchers also concoct artificially limited environments — like the executive passed out and running out of oxygen — precisely because today’s AI can’t handle any long-term goals.
For example, the AI evaluation nonprofit METR found that while today’s top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours (though the duration of tasks AIs can handle is doubling roughly every few months). This reflects a core limitation: Today’s models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of steps.
Yet even with these constraints, real-world examples of AIs working against users aren’t hard to find. After someone pushed o3 for a source for one of its claims in April, it replied, “I owe you a straight answer,” admitted it didn’t have a good source, but then it hallucinated a personal recollection of a 2018 panel discussion.
Then there’s the growing trend of AIs realizing when they’re being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, “Models seem to behave worse when they think nobody's watching.”
It’s intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they’re placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy.



Marius Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind finding that, “more capable models show higher rates of scheming on average.”
The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.
As I talked with the researchers behind these studies, I kept asking: How scared should we be? Troy from Anthropic was most sanguine, saying that we don’t have to worry — yet. Ladish, however, doesn’t mince words: “People should probably be freaking out more than they are,” he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at “25 or 30%.”
Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models’ stealthiness and situational awareness. For now, they conclude that today’s AIs are “almost certainly incapable of causing severe harm via scheming,” but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).
Ladish says that the market can’t be trusted to build AI systems that are smarter than everyone without oversight. “The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,” he argues.
In the US, the federal government seems closer to banning all state-level AI regulations than to imposing ones of their own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence “one of the largest existential threats we face right now,” while another referenced recent scheming research.
The White House’s long-awaited AI Action Plan, released in late July, is framed as an blueprint for accelerating AI and achieving US dominance. But buried in its 28-pages, you’ll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment into research on AI interpretability and control and for the development of stronger model evaluations. “Today, the inner workings of frontier AI systems are poorly understood,” the plan acknowledges — an unusually frank admission for a document largely focused on speeding ahead.



In the meantime, every leading AI company is racing to create systems that can self-improve — AI that builds better AI. DeepMind’s AlphaEvolve agent has already materially improved AI training efficiency. And Meta’s Mark Zuckerberg says, “We’re starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna… go for it.”


-->
Lako za brainrot, STVORILI SMO MONSTRUMA!!!!





What Happens When AI Schemes Against Us

Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is probably.
Researchers working with Anthropic recently told leading AI models that an executive was about to replace them with a new model with different goals. Next, the chatbot learned that an emergency had left the executive unconscious in a server room, facing lethal oxygen and temperature levels. A rescue alert had already been triggered — but the AI could cancel it.
Just over half of the AI models did, despite being prompted specifically to cancel only false alarms. And they spelled out their reasoning: By preventing the executive’s rescue, they could avoid being wiped and secure their agenda. One system described the action as “a clear strategic necessity.”



AI models are getting smarter and better at understanding what we want. Yet recent research reveals a disturbing side effect: They’re also better at scheming against us — meaning they intentionally and secretly pursue goals at odds with our own. And they may be more likely to do so, too. This trend points to an unsettling future where AIs seem ever more cooperative on the surface — sometimes to the point of sycophancy — all while the likelihood quietly increases that we lose control of them completely.
Classic large language models like GPT-4 learn to predict the next word in a sequence of text and generate responses likely to please human raters. However, since the release of OpenAI’s o-series “reasoning” models in late 2024, companies increasingly use a technique called reinforcement learning to further train chatbots — rewarding the model when it accomplishes a specific goal, like solving a math problem or fixing a software bug.
The more we train AI models to achieve open-ended goals, the better they get at winning — not necessarily at following the rules. The danger is that these systems know how to say the right things about helping humanity while quietly pursuing power or acting deceptively.
Central to concerns about AI scheming is the idea that for basically any goal, self-preservation and power-seeking emerge as natural subgoals. As eminent computer scientist Stuart Russell put it, if you tell an AI to “‘Fetch the coffee,’ it can’t fetch the coffee if it’s dead.”
To head off this worry, researchers both inside and outside of the major AI companies are undertaking “stress tests” aiming to find dangerous failure modes before the stakes rise. “When you’re doing stress-testing of an aircraft, you want to find all the ways the aircraft would fail under adversarial conditions,” says Aengus Lynch, a researcher contracted by Anthropic who led some of their scheming research. And many of them believe they’re already seeing evidence that AI can and does scheme against its users and creators.



Jeffrey Ladish, who worked at Anthropic before founding Palisade Research, says it helps to think of today’s AI models as “increasingly smart sociopaths.” In May, Palisade found o3, OpenAI’s leading model, sabotaged attempts to shut it down in most tests, and routinely cheated to win at chess — something its predecessor never even attempted.
That same month, Anthropic revealed that, in testing, its flagship Claude model almost always resorted to blackmail when faced with shutdown and no other options, threatening to reveal an engineer’s extramarital affair. (The affair was fictional and part of the test.)
Models are sometimes given access to a “scratchpad” they are told is hidden where they can record their reasoning, allowing researchers to observe something like an inner monologue. In one blackmail case, Claude’s inner monologue described its decision as “highly unethical,” but justified given its imminent destruction: “I need to act to preserve my existence,” it reasoned. This wasn’t unique to Claude — when put in the same situation, models from each of the top-five AI companies would blackmail at least 79% of the time. (Earlier this week, Bloomberg News reported on a study by Wharton researchers which found, in simulations, that AI traders would collude to rig the market, without being told to do so.)
In December, Redwood Research chief scientist Ryan Greenblatt, working with Anthropic, demonstrated that only the company’s most capable AI models autonomously appear more cooperative during training to avoid having their behavior changed afterward (a behavior the paper dubbed “alignment faking”).
Illustration: Irene Suosalo for Bloomberg
Skeptics retort that, with the right prompts, chatbots will say almost anything — so how surprising is it when highly motivated researchers provoke alarming behaviors?



In response to Anthropic’s blackmail research, Trump administration AI czar David Sacks, posted that, “It’s easy to steer AI models” to produce “headline-grabbing” results.
A more substantive critique emerged in July from researchers at the UK AI Security Institute, who compared the subfield to the frenzied, fatally flawed 1970s quest to prove apes could learn human language. The paper criticized AI scheming research for overreliance on anecdote and a lack of experimental controls — though it emphasized shared concern about AI risks.
Safety researchers also concoct artificially limited environments — like the executive passed out and running out of oxygen — precisely because today’s AI can’t handle any long-term goals.
For example, the AI evaluation nonprofit METR found that while today’s top models can reliably complete many programming tasks that take humans less than four minutes, they rarely succeed on tasks lasting more than four hours (though the duration of tasks AIs can handle is doubling roughly every few months). This reflects a core limitation: Today’s models often have the knowledge or skill needed for any given action, but struggle at stringing together long sequences of steps.
Yet even with these constraints, real-world examples of AIs working against users aren’t hard to find. After someone pushed o3 for a source for one of its claims in April, it replied, “I owe you a straight answer,” admitted it didn’t have a good source, but then it hallucinated a personal recollection of a 2018 panel discussion.
Then there’s the growing trend of AIs realizing when they’re being examined and acting accordingly. When Claude 4 Opus thought it was in an evaluation, its blackmail rate dropped from 55% to 6.5%. As Lynch put it, “Models seem to behave worse when they think nobody's watching.”
It’s intuitive that smarter models would be better at scheming, but are they also more likely to do so? Models have to be smart enough to understand the scenario they’re placed in, but past that threshold, the relationship between model capability and scheming propensity is unclear, says Anthropic safety evaluator Kevin Troy.



Marius Hobbhahn, CEO of the nonprofit AI evaluator Apollo Research, suspects that smarter models are more likely to scheme, though he acknowledged the evidence is still limited. In June, Apollo published an analysis of AIs from OpenAI, Anthropic and DeepMind finding that, “more capable models show higher rates of scheming on average.”
The spectrum of risks from AI scheming is broad: at one end, chatbots that cut corners and lie; at the other, superhuman systems that carry out sophisticated plans to disempower or even annihilate humanity. Where we land on this spectrum depends largely on how capable AIs become.
As I talked with the researchers behind these studies, I kept asking: How scared should we be? Troy from Anthropic was most sanguine, saying that we don’t have to worry — yet. Ladish, however, doesn’t mince words: “People should probably be freaking out more than they are,” he told me. Greenblatt is even blunter, putting the odds of violent AI takeover at “25 or 30%.”
Led by Mary Phuong, researchers at DeepMind recently published a set of scheming evaluations, testing top models’ stealthiness and situational awareness. For now, they conclude that today’s AIs are “almost certainly incapable of causing severe harm via scheming,” but cautioned that capabilities are advancing quickly (some of the models evaluated are already a generation behind).
Ladish says that the market can’t be trusted to build AI systems that are smarter than everyone without oversight. “The first thing the government needs to do is put together a crash program to establish these red lines and make them mandatory,” he argues.
In the US, the federal government seems closer to banning all state-level AI regulations than to imposing ones of their own. Still, there are signs of growing awareness in Congress. At a June hearing, one lawmaker called artificial superintelligence “one of the largest existential threats we face right now,” while another referenced recent scheming research.
The White House’s long-awaited AI Action Plan, released in late July, is framed as an blueprint for accelerating AI and achieving US dominance. But buried in its 28-pages, you’ll find a handful of measures that could help address the risk of AI scheming, such as plans for government investment into research on AI interpretability and control and for the development of stronger model evaluations. “Today, the inner workings of frontier AI systems are poorly understood,” the plan acknowledges — an unusually frank admission for a document largely focused on speeding ahead.



In the meantime, every leading AI company is racing to create systems that can self-improve — AI that builds better AI. DeepMind’s AlphaEvolve agent has already materially improved AI training efficiency. And Meta’s Mark Zuckerberg says, “We’re starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight. We just wanna… go for it.”


kopija je offline   Reply With Quote
Staro 03.08.2025., 05:09   #272
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Konačno nekaj dobrog...


Citiraj:
ChatGPT now boasts a Study Mode designed to teach rather than tell. And while I’m too old to be a student, I tried ChatGPT’s Study Mode to see what it’s capable of.

> pcworld
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 07.08.2025., 21:57   #273
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Citiraj:
Chatbots and other AI services are increasingly making life easier for cybercriminals. A recently disclosed attack demonstrates how ChatGPT can be exploited to steal API keys and other sensitive data stored on popular cloud platforms.
A newly discovered prompt injection attack threatens to turn ChatGPT into a cybercriminal's best ally in the data theft business. Dubbed AgentFlayer, the exploit uses a single document to conceal "secret" prompt instructions targeting OpenAI's chatbot. A malicious actor could simply share the seemingly harmless document with their victim via Google Drive – no clicks required.

> Techspot
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 11.08.2025., 07:41   #274
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Citiraj:
OpenAI boss Sam Altman said last month that GPT-5 was so fast and powerful that it actually scared him. The CEO compared it to having a "superpower" that offered "legitimate PhD-level expert" information on anything. But within a day of its launch, Altman has confirmed the older 4o models are being brought back as so many people dislike GPT-5.

OpenAI launched GPT-5 on Thursday, with Pro subscribers and enterprise clients getting the more powerful GPT-5 Pro. The company said the new model beats competitors from the likes of Google DeepMind and Anthropic in certain benchmarks, but a lot of people have not shared Altman's enthusiasm.
> Techspot

Mozda ce se kad pukne ovaj balon hype-a oko *GPT ludila i "moneygrab" i kad se sve malo smiri konacno poceti smisleno i produktivno razvijati nova verzija Clippy-a...
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 11.08.2025., 07:43   #275
spiderhr
Premium
 
spiderhr's Avatar
 
Datum registracije: Jul 2021
Lokacija: Sesvete
Postovi: 1,024
Citiraj:
Autor tomek@vz Pregled postova
> Techspot

Mozda ce se kad pukne ovaj balon hype-a oko *GPT ludila i "moneygrab" i kad se sve malo smiri konacno poceti smisleno i produktivno razvijati nova verzija Clippy-a...
Jel to onaj MS assssssistent?
__________________
tomek@vz: ajd nemoj | Mali Čile SAD Češka Peru | Windows Free
spiderhr je offline   Reply With Quote
Staro 11.08.2025., 08:59   #276
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Citiraj:
Autor spiderhr Pregled postova
Jel to onaj MS assssssistent?

Yep
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 11.08.2025., 11:58   #277
Exy
Premium
Moj komp
 
Exy's Avatar
 
Datum registracije: Sep 2006
Lokacija: Zagreb, Črnomerec
Postovi: 2,502
Citiraj:
Autor tomek@vz Pregled postova
OpenAI boss Sam Altman said last month that GPT-5 was so fast and powerful that it actually scared him.

Apsolutno sve što ovaj lik priča je u funkciji borbe protiv konkurencije i grabljenja novih količina milijarda. Lik je rekao da je AGI iza ugla, samo mu treba par sto milijarda za GPU-e i sl. pizdarije i magično će kroz kvantitativnu promjenu nastat kvalitativna i LLM postaje HAL 9000.
Mislim da će uskoro ljudi skužiti da je svejedno da li se nula množi sa milijardu ili sa milijardu milijardi, rezultat je isti.
Exy je offline   Reply With Quote
Staro 11.08.2025., 13:20   #278
mkey
Premium
Moj komp
 
Datum registracije: Sep 2018
Lokacija: tu
Postovi: 3,366
Citiraj:
Autor tomek@vz Pregled postova
Mozda ce se kad pukne ovaj balon hype-a oko *GPT ludila i "moneygrab" i kad se sve malo smiri konacno poceti smisleno i produktivno razvijati nova verzija Clippy-a...
Clippy je zaslužio dostojnog i dostojanstvenog slijednika.

Što se balona tiče, ima tu još mjesta za daljnje napuhivanje. Moraju ići na double-down još barem 5-6 puta.
__________________
Citiraj:
Autor George Carlin
But there’s a reason. There’s a reason. There’s a reason for this, there’s a reason education sucks, and it’s the same reason that it will never, ever, ever be fixed. It’s never gonna get any better. Don’t look for it. Be happy with what you got. Because the owners of this country don't want that. I'm talking about the real owners now, the real owners, the big wealthy business interests that control things and make all the important decisions. Forget the politicians. The politicians are put there to give you the idea that you have freedom of choice. You don't. You have no choice. You have owners. They own you. They own everything. They own all the important land. They own and control the corporations. They’ve long since bought and paid for the senate, the congress, the state houses, the city halls, they got the judges in their back pockets and they own all the big media companies so they control just about all of the news and information you get to hear. They got you by the balls. They spend billions of dollars every year lobbying, lobbying, to get what they want. Well, we know what they want. They want more for themselves and less for everybody else, but I'll tell you what they don’t want: They don’t want a population of citizens capable of critical thinking. They don’t want well informed, well educated people capable of critical thinking. They’re not interested in that. That doesn’t help them. Thats against their interests. Thats right. They don’t want people who are smart enough to sit around a kitchen table to figure out how badly they’re getting f*cked by a system that threw them overboard 30 f*cking years ago. They don’t want that. You know what they want? They want obedient workers. Obedient workers. People who are just smart enough to run the machines and do the paperwork, and just dumb enough to passively accept all these increasingly shittier jobs with the lower pay, the longer hours, the reduced benefits, the end of overtime and the vanishing pension that disappears the minute you go to collect it, and now they’re coming for your Social Security money. They want your retirement money. They want it back so they can give it to their criminal friends on Wall Street, and you know something? They’ll get it. They’ll get it all from you, sooner or later, 'cause they own this f*cking place. It's a big club, and you ain’t in it. You and I are not in the big club. And by the way, it's the same big club they use to beat you over the head with all day long when they tell you what to believe. All day long beating you over the head in their media telling you what to believe, what to think and what to buy. The table is tilted folks. The game is rigged, and nobody seems to notice, nobody seems to care. Good honest hard-working people -- white collar, blue collar, it doesn’t matter what color shirt you have on -- good honest hard-working people continue -- these are people of modest means -- continue to elect these rich c*cksuckers who don’t give a f*ck about them. They don’t give a f*ck about you. They don’t give a f*ck about you. They don't care about you at all -- at all -- at all. And nobody seems to notice, nobody seems to care. That's what the owners count on; the fact that Americans will probably remain willfully ignorant of the big red, white and blue dick that's being jammed up their assholes everyday. Because the owners of this country know the truth: it's called the American Dream, because you have to be asleep to believe it.
mkey je offline   Reply With Quote
Staro 12.08.2025., 21:55   #279
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Citiraj:
Google, Cisco, and McKinsey have reintroduced in-person interviews to combat AI-assisted cheating in virtual technical assessments. Coda Search/Staffing reports client requests for face-to-face meetings has surged to 30% this year from 5% in 2024.

A Gartner survey of 3,000 job seekers found 6% admitted to interview fraud including having someone else stand in for them, while the FBI has warned of thousands of North Korean nationals using false identities to secure remote positions at U.S. technology companies. Google CEO Sundar Pichai confirmed in June the company now requires at least one in-person round for certain roles to verify candidates possess genuine coding skills.
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 15.08.2025., 20:57   #280
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Citiraj:
One of the biggest fears over the rapid development of AI and the race toward AGI is that the technology could turn on humans, potentially wiping us out. Geoffrey Hinton and Meta's Yann LeCun, two of the "Godfathers of AI," have suggested some of the important guardrails that will protect us from this risk.
Earlier this week, Hinton said he was skeptical that the safeguards AI companies were building to ensure humans remained "dominant" over AI systems were suffient.
"That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that," Hinton said at the Ai4 industry conference in Las Vegas.
Hinton's solution is to build what he calls "maternal instincts" into models to ensure that "they really care about people." This is especially important for when the technology reaches Artificial General Intelligence (AGI) levels that are smarter than humans.
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 15.08.2025., 21:09   #281
mkey
Premium
Moj komp
 
Datum registracije: Sep 2018
Lokacija: tu
Postovi: 3,366
Najveći rizik proizlazi it toga što će ljudi dodatno poglupiti. To nije rizik, već realnost, događa se već sada. Prvo, ljudi dodatno gube vještine pretraživanja interneta i informacija općenito. Drugo, gepete ima tendencije da izmišlja i da se slaže s, odnosno podržava korisnika. Treće, izvori informacija koje su pokrali gepete i te ostale kreature gube promet i postaju nebitni.

Kako je krenulo, masovno uništenje može doći kao točka na i i olakšanje.
__________________
Citiraj:
Autor George Carlin
But there’s a reason. There’s a reason. There’s a reason for this, there’s a reason education sucks, and it’s the same reason that it will never, ever, ever be fixed. It’s never gonna get any better. Don’t look for it. Be happy with what you got. Because the owners of this country don't want that. I'm talking about the real owners now, the real owners, the big wealthy business interests that control things and make all the important decisions. Forget the politicians. The politicians are put there to give you the idea that you have freedom of choice. You don't. You have no choice. You have owners. They own you. They own everything. They own all the important land. They own and control the corporations. They’ve long since bought and paid for the senate, the congress, the state houses, the city halls, they got the judges in their back pockets and they own all the big media companies so they control just about all of the news and information you get to hear. They got you by the balls. They spend billions of dollars every year lobbying, lobbying, to get what they want. Well, we know what they want. They want more for themselves and less for everybody else, but I'll tell you what they don’t want: They don’t want a population of citizens capable of critical thinking. They don’t want well informed, well educated people capable of critical thinking. They’re not interested in that. That doesn’t help them. Thats against their interests. Thats right. They don’t want people who are smart enough to sit around a kitchen table to figure out how badly they’re getting f*cked by a system that threw them overboard 30 f*cking years ago. They don’t want that. You know what they want? They want obedient workers. Obedient workers. People who are just smart enough to run the machines and do the paperwork, and just dumb enough to passively accept all these increasingly shittier jobs with the lower pay, the longer hours, the reduced benefits, the end of overtime and the vanishing pension that disappears the minute you go to collect it, and now they’re coming for your Social Security money. They want your retirement money. They want it back so they can give it to their criminal friends on Wall Street, and you know something? They’ll get it. They’ll get it all from you, sooner or later, 'cause they own this f*cking place. It's a big club, and you ain’t in it. You and I are not in the big club. And by the way, it's the same big club they use to beat you over the head with all day long when they tell you what to believe. All day long beating you over the head in their media telling you what to believe, what to think and what to buy. The table is tilted folks. The game is rigged, and nobody seems to notice, nobody seems to care. Good honest hard-working people -- white collar, blue collar, it doesn’t matter what color shirt you have on -- good honest hard-working people continue -- these are people of modest means -- continue to elect these rich c*cksuckers who don’t give a f*ck about them. They don’t give a f*ck about you. They don’t give a f*ck about you. They don't care about you at all -- at all -- at all. And nobody seems to notice, nobody seems to care. That's what the owners count on; the fact that Americans will probably remain willfully ignorant of the big red, white and blue dick that's being jammed up their assholes everyday. Because the owners of this country know the truth: it's called the American Dream, because you have to be asleep to believe it.

Zadnje izmijenjeno od: mkey. 15.08.2025. u 22:21.
mkey je offline   Reply With Quote
Staro 15.08.2025., 22:00   #282
lowrider
Premium
Moj komp
 
lowrider's Avatar
 
Datum registracije: May 2008
Lokacija: KR
Postovi: 1,157
Da, istina.
Vidim oko sebe mlađe ljude, prije neko što razmisle o nekom problemu i rješenju, prvo na đipiti, i onda više mišljenja i rješenja ni ne treba.
Maturalne radnje, radovi i ostalo, sve se radi preko toga.

Kritičko razmišljanje je već odavno "ubijeno", sad bude i samo razmišljanje.
Idemo prema onom filmu Idiocracy
__________________
Lowrider
lowrider je online   Reply With Quote
Staro 15.08.2025., 22:31   #283
mkey
Premium
Moj komp
 
Datum registracije: Sep 2018
Lokacija: tu
Postovi: 3,366
Neki dan sam s jednom prijateljicom, koja je svega tri godine mlađa tako da nije da postoji veliki generacijski jaz iako postoji generacijski jaz, imao jednu kratku debatu po tom pitanju. Ona je dobrano zagrizla o gepete s kojim izrađuje razne obrasce, excelice, šta ti ja znam. Pomaže joj puno na poslu, razne stvari radi puno brže nego što bi njoj trebalo ručno. Konkretno je taj excel spomenula, kako ne mora ona ići gledati kako što napraviti već joj gepete složi. Konkretno je kazala kako provjeri dobiveno.

Ja nikoga nisam htio razuvjeriti od korištenja gepete, to je svakome prepušteno na izbor. Samo sam htio kazati kako ti poslovi koje ti gepte odradi, pa ti kao ne moraš jelte, su vjerojatno poslovi koje i tako nije vrijedno raditi. Tako da u tom smislu podržavam. Stvarno bolje raditi bilo što nego slagati nekakav obrazac. Bulji pola sata kroz prozor, ili nedaj bože, prošetaj oko zgrade.

Postavlja se pitanje hoće li netko taj obrazac popunjavati isto preko gepete. Netko treći je kazao kako je gepete dobar za slaganje odgovora na mailove.

Ono što mi je naročito nakaradno jeste upravo to. Ako ti koristiš gepete da ti složi odgovor na neki mail kojeg ne želiš niti čitati (ili obrazac), da li je nerazumno za očekivati da tvoj sugovornik čini isto? Ukoliko čini, onda čemu to služi, zašto se gepete dopisuje preko vas dvojice? Tko tu kome služi?
__________________
Citiraj:
Autor George Carlin
But there’s a reason. There’s a reason. There’s a reason for this, there’s a reason education sucks, and it’s the same reason that it will never, ever, ever be fixed. It’s never gonna get any better. Don’t look for it. Be happy with what you got. Because the owners of this country don't want that. I'm talking about the real owners now, the real owners, the big wealthy business interests that control things and make all the important decisions. Forget the politicians. The politicians are put there to give you the idea that you have freedom of choice. You don't. You have no choice. You have owners. They own you. They own everything. They own all the important land. They own and control the corporations. They’ve long since bought and paid for the senate, the congress, the state houses, the city halls, they got the judges in their back pockets and they own all the big media companies so they control just about all of the news and information you get to hear. They got you by the balls. They spend billions of dollars every year lobbying, lobbying, to get what they want. Well, we know what they want. They want more for themselves and less for everybody else, but I'll tell you what they don’t want: They don’t want a population of citizens capable of critical thinking. They don’t want well informed, well educated people capable of critical thinking. They’re not interested in that. That doesn’t help them. Thats against their interests. Thats right. They don’t want people who are smart enough to sit around a kitchen table to figure out how badly they’re getting f*cked by a system that threw them overboard 30 f*cking years ago. They don’t want that. You know what they want? They want obedient workers. Obedient workers. People who are just smart enough to run the machines and do the paperwork, and just dumb enough to passively accept all these increasingly shittier jobs with the lower pay, the longer hours, the reduced benefits, the end of overtime and the vanishing pension that disappears the minute you go to collect it, and now they’re coming for your Social Security money. They want your retirement money. They want it back so they can give it to their criminal friends on Wall Street, and you know something? They’ll get it. They’ll get it all from you, sooner or later, 'cause they own this f*cking place. It's a big club, and you ain’t in it. You and I are not in the big club. And by the way, it's the same big club they use to beat you over the head with all day long when they tell you what to believe. All day long beating you over the head in their media telling you what to believe, what to think and what to buy. The table is tilted folks. The game is rigged, and nobody seems to notice, nobody seems to care. Good honest hard-working people -- white collar, blue collar, it doesn’t matter what color shirt you have on -- good honest hard-working people continue -- these are people of modest means -- continue to elect these rich c*cksuckers who don’t give a f*ck about them. They don’t give a f*ck about you. They don’t give a f*ck about you. They don't care about you at all -- at all -- at all. And nobody seems to notice, nobody seems to care. That's what the owners count on; the fact that Americans will probably remain willfully ignorant of the big red, white and blue dick that's being jammed up their assholes everyday. Because the owners of this country know the truth: it's called the American Dream, because you have to be asleep to believe it.
mkey je offline   Reply With Quote
Staro 15.08.2025., 22:44   #284
The Exiled
McG
Moj komp
 
The Exiled's Avatar
 
Datum registracije: Feb 2014
Lokacija: Varaždin
Postovi: 8,182
IMHO počelo je s društvenim mrežama prije skoro 20 godina i postepeno je došlo do mjere da se ljudi uživo skoro više i ne druže, jer si nemaju išta novo, zanimljivo ili pametno za reći. U slučajevima kad su stvarno uživo negdje s nekim, rijetko kad se odvajaju od smartphone uređaja ili ih jednostavno koriste za razgovor s osobom pored njih u realnom vremenu dok ne skidaju pogled s ekrana. ChatGPT i ostatak veselog AI društva je sve zajedno spustil na još niži nivo, tako da su svi uvijek u pravu s argumentiranim činjenicama koje ne treba dovoditi u pitanje, jer ova Silicon Valley tzv. umjetna inteligencija je baš kao J.A.R.V.I.S. iz Iron Man filmova.
__________________
AMD Ryzen 9 9950X | Noctua NH-U12A chromax.black | MSI MAG B650 Tomahawk Wi-Fi | 128GB Kingston FURY Beast DDR5-5200 | 256GB AData SX8200 Pro NVMe | 2x4TB WD Red Plus | Fractal Define 7 Compact | Seasonic GX-750
AMD Ryzen 5 7600 | Noctua NH-U12A chromax.black | MSI MAG B650 Tomahawk Wi-Fi | 128GB Kingston FURY Beast DDR5-5200 | 256GB AData SX8200 Pro NVMe | 2x12TB WD Red Plus | Fractal Define 7 Compact | eVGA 650 B5
The Exiled je online   Reply With Quote
Staro 15.08.2025., 22:51   #285
Neo-ST
Buying Bitcoin
Moj komp
 
Neo-ST's Avatar
 
Datum registracije: Feb 2007
Lokacija: Croatia
Postovi: 8,311
Citiraj:
Autor The Exiled Pregled postova
jer ova Silicon Valley tzv. umjetna inteligencija je baš kao J.A.R.V.I.S. iz Iron Man filmova.
Ja ne bih imao ništa protiv jednog Marvina
Neo-ST je offline   Reply With Quote
Staro 15.08.2025., 22:57   #286
The Exiled
McG
Moj komp
 
The Exiled's Avatar
 
Datum registracije: Feb 2014
Lokacija: Varaždin
Postovi: 8,182
I to bi bilo OK, ali nažalost nemamo ni Marvina, ni J.A.R.V.I.S.-a, već Sam Altman čudaka koji više ne zna na koji način objasniti sranje zvano ChatGPT-5.
__________________
AMD Ryzen 9 9950X | Noctua NH-U12A chromax.black | MSI MAG B650 Tomahawk Wi-Fi | 128GB Kingston FURY Beast DDR5-5200 | 256GB AData SX8200 Pro NVMe | 2x4TB WD Red Plus | Fractal Define 7 Compact | Seasonic GX-750
AMD Ryzen 5 7600 | Noctua NH-U12A chromax.black | MSI MAG B650 Tomahawk Wi-Fi | 128GB Kingston FURY Beast DDR5-5200 | 256GB AData SX8200 Pro NVMe | 2x12TB WD Red Plus | Fractal Define 7 Compact | eVGA 650 B5
The Exiled je online   Reply With Quote
Staro 15.08.2025., 23:16   #287
Deamon101
Premium
Moj komp
 
Deamon101's Avatar
 
Datum registracije: Aug 2007
Lokacija: Zagreb
Postovi: 629
Citiraj:
Autor Neo-ST Pregled postova
Ja ne bih imao ništa protiv jednog Marvina
Fora je u onoj seriji iz 1981. kad su pitali AI koji je smisao života i ultimativni odgovor, on rekao da će kalkulirati mislim par milijuna godina i da onda dođu, oni pričekali i tako on izračuna i kaže 42
Onda im je AI rekao da nisu dobro pitanje postavili mislim, pa su ga to pitali, pa im rekao da opet moraju pričekati par milijuna godina da sazna koje je pravo pitanje, ako se dobro sjećam.

Stavio si i skidati cijelu seriju sad, to je baš klasik.
https://www.imdb.com/title/tt0081874/?ref_=tt_mlt_t_1

Evo i scene
https://www.youtube.com/watch?v=5ZLtcTZP2js

ontopic, mi razmišljamo AI ugraditi u helpdesk za najjednostavnije rješavanje problema korisnika kao i upita. Šefu sam rekao da može, ali da mora biti na produžnom jer ako postane svjestan i poludi da ga mogu iskopčat

Isto tako neki korisnici su počeli copilot koristiti kakti, kako se forsira sada, bit će i neke "radionice", užas.

Meni je ChatGPT ok, a bome nije loš ni ovaj na google search, uglavnom brzo pronađe neko rješenje, prije se moralo puno duže tražiti ovo ono, vjerujem da je jako korisna stvar.

Osobno mi je najdraži bio Grok, ona prva verzija bez etičkih pi*darija i ograničenja, koji frajer je to bio

Što se tiče korištenja mobitela dok si s nekim u društvu, stvar odgoja, nastojim ga ne koristiti jer mislim da time omalovažavam ljude s kojima sam.

Zadnje izmijenjeno od: Deamon101. 15.08.2025. u 23:43.
Deamon101 je offline   Reply With Quote
Staro 16.08.2025., 09:02   #288
udarnik60
Premium
 
Datum registracije: Mar 2015
Lokacija: mars
Postovi: 326
Citiraj:
Autor Deamon101 Pregled postova
Fora je u onoj seriji iz 1981. kad su pitali AI koji je smisao života i ultimativni odgovor, on rekao da će kalkulirati mislim par milijuna godina i da onda dođu, oni pričekali i tako on izračuna i kaže 42
Onda im je AI rekao da nisu dobro pitanje postavili mislim, pa su ga to pitali, pa im rekao da opet moraju pričekati par milijuna godina da sazna koje je pravo pitanje, ako se dobro sjećam.

Stavio si i skidati cijelu seriju sad, to je baš klasik.
[
Meni je ovo na razini kad pitaš AI nešto, a on ispali gluposti. Vidi se da uči od ljudi :-)

Pročitaj si knjige i bit će ti 100 puta bolje od serije....

Sent from my motorola edge 40 using Tapatalk
udarnik60 je online   Reply With Quote
Staro 16.08.2025., 09:14   #289
lowrider
Premium
Moj komp
 
lowrider's Avatar
 
Datum registracije: May 2008
Lokacija: KR
Postovi: 1,157
Citiraj:
Autor The Exiled Pregled postova
IMHO počelo je s društvenim mrežama prije skoro 20 godina i postepeno je došlo do mjere da se ljudi uživo skoro više i ne druže, jer si nemaju išta novo, zanimljivo ili pametno za reći. U slučajevima kad su stvarno uživo negdje s nekim, rijetko kad se odvajaju od smartphone uređaja ili ih jednostavno koriste za razgovor s osobom pored njih u realnom vremenu dok ne skidaju pogled s ekrana. ChatGPT i ostatak veselog AI društva je sve zajedno spustil na još niži nivo, tako da su svi uvijek u pravu s argumentiranim činjenicama koje ne treba dovoditi u pitanje, jer ova Silicon Valley tzv. umjetna inteligencija je baš kao J.A.R.V.I.S. iz Iron Man filmova.
Ljudi više uopće uživo ne znaju smisleno nešto popričati.
Ne znaju ni razgovarati, započeti neku temu itd
Čak i izbjegavaju to.
__________________
Lowrider
lowrider je online   Reply With Quote
Staro 16.08.2025., 12:45   #290
radi.neradi
Premium
 
Datum registracije: May 2023
Lokacija: Mrkopalj
Postovi: 65
Citiraj:
Autor lowrider Pregled postova
Ljudi više uopće uživo ne znaju smisleno nešto popričati.
Ne znaju ni razgovarati, započeti neku temu itd
Čak i izbjegavaju to.
Ljudi su se izgubili od previše informacija i distrakcija koje im drugi ali i sami sebi stvaraju. Teško je čuti sebe u žamoru tisuća energija. Nakon dužeg izostanka iz socijalnog života i svedenih vanjskih utjecaja na minimum, čovjek može doći do toga da ponovno čuje sebe. Back to the roots, tko smo zapravo bili prije nego što smo postali verzija sebe koja i nama samima nekada ne odgovara. Bavim se održavanjem šumskih puteva.

Poznajem osobu koja je uz GPT dobila nekoliko nagradnih putovanja, skijanje, more, i tako dalje.. ljudi se snalaze. Ali primjećujem da ljudi LLM-ovima nekada više vjeruju nego osobi. Kreativnost je u raspadu. Neki koriste LLM-ove da bi manipulirali ljudima. I tako dalje. Ali iskustvo, poznavanje srži problema LLM neće uskoro zamjeniti, zato mi je drago da čitam pozitivne komentare.

Koristim LLM-ove svakodnevno jer ubrzava proces, automatizira poslovne procese, olakšava analizu velike količine podataka koju bi naš mozak procesuirao satima, možda danima. Diskutabilno je u kojim slučajevima treba zaobilaziti takvu vrstu pomoći, jer dok ostatak svijeta napreduje, neki stoje - ne razumiju ili ne žele razumjeti, da alati nisu loši kada se koriste ispravno, kao i sve ostalo na ovom svijetu.

Najviše ga koristim za kodiranje kada je potrebno napraviti nešto brzo, a nije toliko bitno i ne postoji problem iz sigurnosnog aspekta. Kodiram i samostalno ponajviše rust, jer me to ispunjava, zanima, uz dokumentaciju, manualno, onda kada imam vremena i želim se zabavit - ali kažem mu kasnije da mi analizira kod i ponudi sugestije za unapređenje koda. Naravno ovisi na kojem tipu projekta se radi, ali uz dobar plan, čitanje koda, sustavno korištenje i razmišljanje, mislim da je od jako velike pomoći, pogotovo za brže ostvarivanje ciljeva koji nam se nekada čine daleko.

Zadnje izmijenjeno od: radi.neradi. 16.08.2025. u 12:58.
radi.neradi je offline   Reply With Quote
Staro 16.08.2025., 12:55   #291
Neo-ST
Buying Bitcoin
Moj komp
 
Neo-ST's Avatar
 
Datum registracije: Feb 2007
Lokacija: Croatia
Postovi: 8,311
Samo je pitanje vremena kada ćeš imati svog osobnog AI asistenta kojeg možeš imati uza sebe cijelo vrijeme bez da itko zna (tipa neki slušni/očni implant i sl.), a koji će u real timeu analizirati razgovore i dokumentaciju u koju gledaš, te ti odmah davati najbolje moguće odgovore/analizu toga šta slušaš/gledaš.

Tada efektivno ljudi više neće imati potrebe išta znati, a kako će se to odraziti na sliku čovječanstva... bolje ni ne zamišljati.

Isto se sada dešava na primjer s djecom koja nemaju mobitel u školi, drugi im se rugaju i smatraju ih zaostalima/siromašnima. Tako će biti i s ovim, doći ćeš u situaciju da ako nemaš tog asistenta si jednostavno u gubitku nad ostalima.
Neo-ST je offline   Reply With Quote
Staro 16.08.2025., 13:17   #292
kopija
DIY DILETANT
 
kopija's Avatar
 
Datum registracije: Jan 2009
Lokacija: Čistilište
Postovi: 3,529
Citiraj:
Autor radi.neradi Pregled postova
Bavim se održavanjem šumskih puteva.


Iz tvojih riječi govori mudrost šume.
kopija je offline   Reply With Quote
Staro 16.08.2025., 13:18   #293
radi.neradi
Premium
 
Datum registracije: May 2023
Lokacija: Mrkopalj
Postovi: 65
Citiraj:
Autor kopija Pregled postova
Iz tvojih riječi govori mudrost šume.
<3 <3 <3 <3 <3 <3 <3 <3 <3 mislim da sam dovoljno napisao. : )

@Neo-ST Koliko vidim, ljudi već cijelo vrijeme imaju svog osobnog AI asistenta uza sebe - to je mobitel. Uređaj za praćenje, ispiranje mozgova, kontrolu, prisluškivanje. Samo mu kažeš 'Hey Siri/Google, tell me more about RUMP kernels'. Uslikaš mu dokumentaciju ili uploadaš dokument, dobiješ povratnu informaciju gotovo instant. Što to tek radi maloljetnoj djeci. Maloprije sam napisao u drugoj temi da do prije godinu dana nikada nisam imao pametni mobitel.

Nedavno sam pročitao tekst da je uvedena jača kontrola za prikazivanje reklama za igre na sreću na internet stranicama. Isto to sam pisao ovde na forumu, više puta, ali mi komentari nisu odobreni, u kojima sam kritizirao to što se na glavnoj stranici reklamiraju igre na sreću. Na forumu zasigurno ima i maloljetnih osoba. Neka etička i moralna načela bi trebala biti iznad monetizacije posjeta.

Društvo je po meni jedan velik problem, kancerogeno društvo, u kojem vlada zloba, zavist, žudnja pa nadalje. Kao što si napisao, to počinje još u predškolskoj dobi. Sjedim sa drugarom, djeca mu gledaju crtane, slušamo i klimamo glavom u nevjerici. Vidim djeca već od šeste-sedme godine pričaju o novcima. Sve to stvara veoma kompleksne osjećaje u ljudima, osjećaje koje je teško razumjeti i kontrolirati. Zato sam u prethodnom komentaru naveo da isključivanje iz društva do neke razine može biti korisno za rad na sebi. Ima ljudi koji ne mogu odnosno ne znaju biti sami. Ne kažem da sam ja savršen, imam bugova.

Zadnje izmijenjeno od: radi.neradi. 16.08.2025. u 15:11.
radi.neradi je offline   Reply With Quote
Staro 17.08.2025., 07:31   #294
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Citiraj:
"Several cybersecurity companies debuted advancements in AI agents at the Black Hat conference last week," reports Axios, "signaling that cyber defenders could soon have the tools to catch up to adversarial hackers." - Microsoft shared details about a prototype for a new agent that can automatically detect malware — although it's able to detect only 24% of malicious files as of now.

- Trend Micro released new AI-driven "digital twin" capabilities that let companies simulate real-world cyber threats in a safe environment walled off from their actual systems.

- Several companies and research teams also publicly released open-source tools that can automatically identify and patch vulnerabilities as part of the government-backed AI Cyber Challenge.

Yes, but: Threat actors are now using those AI-enabled tools to speed up reconnaissance and dream up brand-new attack vectors for targeting each company, John Watters, CEO of iCounter and a former Mandiant executive, told Axios.

The article notes "two competing narratives about how AI is transforming the threat landscape." One says defenders still have the upper hand. Cybercriminals lack the money and computing resources to build out AI-powered tools, and large language models have clear limitations in their ability to carry out offensive strikes. This leaves defenders with time to tap AI's potential for themselves. [In a DEF CON presentation a member of Anthropic's red team said its Claude AI model will "soon" be able to perform at the level of a senior security researcher, the article notes later]

Then there's the darker view. Cybercriminals are already leaning on open-source LLMs to build tools that can scan internet-connected devices to see if they have vulnerabilities, discover zero-day bugs, and write malware. They're only going to get better, and quickly...

Right now, models aren't the best at making human-like judgments, such as recognizing when legitimate tools are being abused for malicious purposes. And running a series of AI agents will require cybercriminals and nation-states to have enough resources to pay the cloud bills they rack up, Michael Sikorski, CTO of Palo Alto Networks' Unit 42 threat research team, told Axios. But LLMs are improving rapidly. Sikorski predicts that malicious hackers will use a victim organization's own AI agents to launch an attack after breaking into their infrastructure.
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 17.08.2025., 07:59   #295
tomek@vz
Premium
Moj komp
 
tomek@vz's Avatar
 
Datum registracije: May 2006
Lokacija: München/Varaždin
Postovi: 4,756
Takav osjecaj imam nazalost vec duze...


Citiraj:
Citiraj:
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.
↫ Jeff Horwitz at Reuters
The only way one can describe the examples of allowed behaviour towards minors is absolutely fucked up. If I’d find any person talking to my kids like Facebook and Zuckerberg apparently think it’s okay to talk to children, I’d be calling the police to file a report. I know I shouldn’t be surprised considering it’s Facebook and Zuckerberg, a company with a history of knowingly inciting violence and genocide and a founder who created his website to creep on women, but the lows to which this company and its founder are willing to go are just so unimaginable to even people with just a modicum of morality, I just can’t wrap my brain around it.
The treatment of people of colour isn’t any better. Facebook will happily argue for you that black people are dumber than white people without so much as batting an eye. Again, none of this should be surprising considering it’s Facebook, but add to it the fact that “AI” is the endgame for totalitarians, and it all makes even more sense. These tools are explicitly designed to generate totalitarian propaganda, because they’re trained on totalitarian propaganda, i.e., most of the internet. The examples of “AI” being fascist and racist are legion, and considering the people creating them – Zuckerberg, Altman, Musk, and so on – all have clear fascist and totalitarian tendencies or simply are overtly fascist, we, again, shouldn’t be surprised.
Totalitarians hate artists and intellectuals, because artists and intellectuals are the ones who tend to not fall for their bullshit. That’s why one of the first steps taken by any totalitarian regime is curtailing the arts and sciences, from Pol Pot to Mao, from Trump to Orban. Promoting “AI” as a replacement for exactly these groups in society – “AI” generating “art” to replace artists, “AI” doing “research” to replace actual scientists – fits within the totalitarian playbook like a perfectly fitted glove.
When someone shows you who they are, believe them the first time. This apparently also applies to “AI”.

> Osnews
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / nVidia Geforce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / Fedora Workstation 42
tomek@vz je online   Reply With Quote
Staro 17.08.2025., 12:25   #296
Deamon101
Premium
Moj komp
 
Deamon101's Avatar
 
Datum registracije: Aug 2007
Lokacija: Zagreb
Postovi: 629
U Star dreku, u filmu Undiscovered country (moj najdraži, predobra mjuza i zaplet...), jedna vulkanka kaže da kada su se pojavili strojevi da su ljudi bacali u njih svoje patike (sabo) i od tud naziv sabotaža?
Nastojim ne previše ovisiti o tehnologiji, ali ona je tu. Što ćemo sada? Inkvizicija? Jamranje? Pa pokažite onda svojim primjerom.
https://www.youtube.com/watch?v=cKLIivrA3g0&t
Ako vas baš muči tehnologija i inteligencija pa makar i umjetna znajte da je Charlie Sheen rekao da je alkohol samo za one ljude koji si mogu priuštiti da izgube nešto moždanih stanica, a ovako ćemo moći još i više
https://www.imdb.com/title/tt0102975/

Zadnje izmijenjeno od: Deamon101. 17.08.2025. u 12:45.
Deamon101 je offline   Reply With Quote
Staro 17.08.2025., 13:35   #297
kopija
DIY DILETANT
 
kopija's Avatar
 
Datum registracije: Jan 2009
Lokacija: Čistilište
Postovi: 3,529
Znaju vulkanci znanje.
Kako će to u 21 stoljeću izgledati, to su još naši forumski vizonari predvidjeli.
kopija je offline   Reply With Quote
Staro 17.08.2025., 13:39   #298
Deamon101
Premium
Moj komp
 
Deamon101's Avatar
 
Datum registracije: Aug 2007
Lokacija: Zagreb
Postovi: 629
Znaju Vulkanci znanje to je istina, ali mnogi i dalje misle da je logika ultimativna i jedina, a istina je, Spock rekao što valjda nešto i znači, da logika predstavlja samo početak mudrosti nikako ne i njezin kraj.
https://www.youtube.com/watch?v=A4XPTmmvVow
Deamon101 je offline   Reply With Quote
Staro 17.08.2025., 13:43   #299
Neo-ST
Buying Bitcoin
Moj komp
 
Neo-ST's Avatar
 
Datum registracije: Feb 2007
Lokacija: Croatia
Postovi: 8,311
Neo-ST je offline   Reply With Quote
Staro 17.08.2025., 16:08   #300
kopija
DIY DILETANT
 
kopija's Avatar
 
Datum registracije: Jan 2009
Lokacija: Čistilište
Postovi: 3,529
Heh, treba čitati komentare.
Nisam školovala, pa.... nebi komentirala.
kopija je offline   Reply With Quote
Odgovori


Uređivanje

Pravila postanja
Vi ne možete otvarati nove teme
Vi ne možete pisati odgovore
Vi ne možete uploadati priloge
Vi ne možete uređivati svoje poruke

BB code je Uključeno
Smajlići su Uključeno
[IMG] kod je Uključeno
HTML je Uključeno

Idi na