PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Umjetna inteligencija (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

tomek@vz 06.04.2025. 07:07

Quote:

Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that in performing the initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content... Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings..."

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."

This is actually not such a dumb use case. Using AI as a tool to find security holes a human can't see > review by a human team (of experts, not low-paid juniors) > implementing the fix.
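Worth unpacking one of the findings above: the "side-channel in cryptographic comparison". As a minimal Python sketch of that bug class (illustrative only, not GRUB2's actual code), a comparison that returns at the first mismatching byte leaks, through timing, how long the correct prefix is; the fix is a constant-time compare:

Code:

import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Vulnerable: returns at the first mismatch, so the response time
    # reveals how many leading bytes of the guess were correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_compare(a: bytes, b: bytes) -> bool:
    # Constant-time comparison from the standard library.
    return hmac.compare_digest(a, b)

An attacker who can measure response times precisely can guess a secret one byte at a time against naive_compare; compare_digest inspects every byte no matter where the mismatch is.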

mkey 06.04.2025. 13:41

Once it also learns to fix the holes it leaves behind on its own, it'll be a sure thing.

tomek@vz 11.04.2025. 13:05

Quote:

Some of the best AI models today still struggle to resolve software bugs that wouldn't trip up experienced devs. TechCrunch: A new study from Microsoft Research, Microsoft's R&D division, reveals that models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini, fail to debug many issues in a software development benchmark called SWE-bench Lite. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI is still no match for human experts in domains such as coding.

The study's co-authors tested nine different models as the backbone for a "single prompt-based agent" that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.

According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).
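The benchmark setup described above is essentially a tool-use loop. A rough, hypothetical Python sketch (the agent interface and names are invented here, not Microsoft Research's actual harness):

Code:

def debug_agent(llm, task, tools, max_steps=20):
    # One prompt-based agent: the model repeatedly picks a tool (e.g. a
    # Python debugger session) until it submits a patch, which is then
    # scored by the task's regression tests.
    transcript = [task.problem_statement]
    for _ in range(max_steps):
        action = llm(transcript)                  # e.g. {"tool": "pdb", "arg": "bt"}
        if action["tool"] == "submit_patch":
            return task.run_tests(action["arg"])  # True = counted as solved
        observation = tools[action["tool"]](action["arg"])
        transcript.append(observation)            # feed tool output back in
    return False                                  # ran out of steps

The reported success rates then correspond to the fraction of the 300 tasks for which a loop like this ends in passing tests.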



Quote:

Meta says in its Llama 4 release announcement that it's specifically addressing "left-leaning" political bias in its AI model, distinguishing this effort from traditional bias concerns around race, gender, and nationality that researchers have long documented. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," the company said.

"All leading LLMs have had issues with bias -- specifically, they historically have leaned left," Meta stated, framing AI bias primarily as a political problem. The company claims Llama 4 is "dramatically more balanced" in handling sensitive topics and touts its lack of "strong political lean" compared to competitors.
AI is supposed to be objective... yeah, right.

mkey 11.04.2025. 15:07

All the biases of the people who work on it, as well as whatever is in the content it consumes, get baked into it.

Exy 11.04.2025. 15:15

Somehow objectivity and LLMs don't fit in the same sentence for me. If the possibility of being objective exists, the possibility of being subjective must exist too, and an LLM has neither. It's like someone asking me whether my spell checker is objective.

mkey 11.04.2025. 19:07

It's not quite the same, my friend. A spell checker looks at one word at a time; "AI" builds entire sentences and paragraphs. There's room for as much objectivity and subjectivity as you like. It all depends on the content the LLM was trained on and the ratings it got as feedback.

Exy 11.04.2025. 21:35

Quote:

Posted by mkey (Post 3798945)
It's not quite the same, my friend. A spell checker looks at one word at a time; "AI" builds entire sentences and paragraphs. There's room for as much objectivity and subjectivity as you like. It all depends on the content the LLM was trained on and the ratings it got as feedback.


Sure, it depends on the training content and objectives, and its outputs will reflect that accordingly, but an LLM in itself can no more be objective or subjective than any other piece of software.

tomek@vz 19.04.2025. 07:08

Quote:

OpenAI's latest reasoning models, o3 and o4-mini, hallucinate more frequently than the company's previous AI systems, according to both internal testing and third-party research. On OpenAI's PersonQA benchmark, o3 hallucinated 33% of the time -- double the rate of older models o1 (16%) and o3-mini (14.8%). The o4-mini performed even worse, hallucinating 48% of the time. Nonprofit AI lab Transluce discovered o3 fabricating processes it claimed to use, including running code on a 2021 MacBook Pro "outside of ChatGPT." Stanford adjunct professor Kian Katanforoosh noted his team found o3 frequently generates broken website links.

OpenAI says in its technical report that "more research is needed" to understand why hallucinations worsen as reasoning models scale up.

kopija 19.04.2025. 17:28

To be a ruthless piece of garbage or to save the planet, that is the question.
Quote:

Put another way, a recent report suggests that even a short three-word "You are welcome" response from an LLM uses up roughly 40-50 milliliters of water.
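For scale, the figure can be sanity-checked with back-of-envelope arithmetic. The constants below are assumptions picked to land in the reported range, not numbers from the report:

Code:

energy_wh = 3.0         # assumed energy for one short response, in Wh
water_l_per_kwh = 15.0  # assumed direct + indirect water use, in L/kWh

water_ml = energy_wh / 1000 * water_l_per_kwh * 1000
print(f"{water_ml:.0f} ml per response")  # 45 ml, inside the quoted 40-50 ml

Halve either assumption and the estimate halves, which is why published per-response water figures vary so widely.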

tomek@vz 19.04.2025. 19:27

Quote:

Posted by kopija (Post 3800217)


More and more I get the impression that Sam Altman is a *unt who only looks out for his own ass and wants to join the stable of the richest IT elite, regardless of the cost and of who gets trampled along the way.

mkey 19.04.2025. 19:44

For me the only question is which burns more electricity, those grandiose "AI" solutions or cryptocurrencies. Green technologies, no doubt about it.

tomek@vz 20.04.2025. 21:21

Quote:

A customer support AI went rogue—and it’s a warning for every company considering replacing workers with automation


> Fortune

mkey 20.04.2025. 22:28

"Dozens of users cancelled subscriptions" :D

tomek@vz 22.04.2025. 07:02

Quote:

Microsoft's BitNet shows what AI can do with just 400MB and no GPU
Quote:

BitNet b1.58 2B4T outperforms rivals like Llama, Gemma, and Qwen on common tasks


> Techspot


Quote:

ChatGPT gets scarily good at guessing photo locations, sparking doxxing concerns
Simple photos could reveal real-world locations


> Techspot

tomek@vz 22.04.2025. 18:07

Quote:

OpenAI’s newest AI models hallucinate way more, for reasons unknown
This does not bode well if you're using the new o3 and o4-mini reasoning models for factual answers.
> Pcworld

Quote:

AI is enabling cybercriminals to act quickly - and with little technical knowledge, Microsoft warns
Fake stores, deepfakes, and chatbots: AI fuels new wave of scams
> Techspot

Shocking (well, OK, not really...)...

Exy 22.04.2025. 20:32

“Fundamentally, AI doesn’t understand your users or how they work,”
Fundamentally, AI doesn't understand anything at all, and that's exactly the problem :D
It's striking how large a share of the whole hype comes down to pure semantics. "Reasoning models", "hallucinations", even "AI" itself of course; all of it presents the LLM as something it simply is not.
Regardless, I've noticed that more and more people rely entirely on the outputs ChatGPT throws at them, accept them completely uncritically, and treat them as Holy Scripture. So in principle I agree that "AI" could end up destroying humanity, but not in that silly Terminator way with big explosions and drama; rather by dumbing it down to the utmost limits amid a quiet, inevitable decline. Like in the old poem:

This is the way the world ends
Not with a bang but a whimper.
:goodnite:

mkey 22.04.2025. 20:44

Idiocracy on fast forward, in other words.

OuttaControl 22.04.2025. 21:35

I'm trying to get it to explain our pressed hamburger to me https://www.pik-vrbovec.hr/grupe-pro...urger-presani/ and there's no chance, it just hallucinates about mixed meat.... :dobartek: :hitthewal:

mkey 22.04.2025. 21:51

Did you try asking it about mesni doručak :D

tomek@vz 25.04.2025. 05:49

Quote:

South Korea's data protection authority said on Thursday that Chinese artificial intelligence startup DeepSeek transferred user information and prompts without permission when the service was still available for download in the country's app market. From a report: The Personal Information Protection Commission said in a statement that Hangzhou DeepSeek Artificial Intelligence Co Ltd did not obtain user consent while transferring personal information to a number of companies in China and the United States at the time of its South Korean launch in January.

Ivo_Strojnica 25.04.2025. 09:24

Well, color me shocked. :D
Facebook alone has gotten so many lawsuits like this, Google too... of course the Chinese will as well :D

At least they don't pretend to be saints.

mkey 25.04.2025. 10:08

Data is the new oil*


*there is a documentary with literally that title, and it has been out for almost 8 years

tomek@vz 29.04.2025. 13:21

Quote:

The once-celebrated partnership between OpenAI's Sam Altman and Microsoft's Satya Nadella is deteriorating amid fundamental disagreements over computing resources, model access, and AI capabilities, according to WSJ. The relationship that Altman once called "the best partnership in tech" has grown strained as both companies prepare for independent futures.

Tensions center on several critical areas: Microsoft's provision of computing power, OpenAI's willingness to share model access, and conflicting views on achieving humanlike intelligence. Altman has expressed confidence OpenAI can build models with humanlike intelligence soon -- a milestone Nadella publicly dismissed as "nonsensical benchmark hacking" during a February podcast.

The companies retain significant leverage over each other. Microsoft can block OpenAI's conversion to a for-profit entity, potentially costing the startup billions if not completed this year. Meanwhile, OpenAI's board can trigger contract clauses preventing Microsoft from accessing its most advanced technology.

After Altman's brief ouster in 2023 -- dubbed "the blip" within OpenAI -- Nadella pursued an "insurance policy" by hiring DeepMind co-founder Mustafa Suleyman for $650 million to develop competing models. The personal relationship has also cooled, with the executives now communicating primarily through scheduled weekly calls rather than frequent text exchanges.

tomek@vz 30.04.2025. 05:28

Honestly, this was to be expected; the fact that the article is about China is beside the point.

Quote:

FBI warns China is using AI to sharpen cyberattacks on US infrastructure
Federal authorities are beginning to see AI signs in every step of an attack chain
> Techspot

AI enables entirely new, more convincing levels of scams. So use MFA everywhere, folks, even though that isn't bulletproof either.
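Since MFA comes up: the common app-based variant is TOTP (RFC 6238), an HMAC of the current 30-second time slot under a shared secret. A minimal sketch using only Python's standard library (the secret below is a made-up example):

Code:

import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)  # time slot index
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app shows

A phished password alone is useless without the current code; the caveat above still applies, though, since a real-time phishing proxy can relay a valid code within its 30-second window.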

tomek@vz 01.05.2025. 08:51

Quote:

Microsoft CEO Satya Nadella said that 20%-30% of code inside the company's repositories was "written by software" -- meaning AI -- during a fireside chat with Meta CEO Mark Zuckerberg at Meta's LlamaCon conference on Tuesday. From a report:
Nadella gave the figure after Zuckerberg asked roughly how much of Microsoft's code is AI-generated today. The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.
Quote:

Wikipedia will employ AI to enhance the work of its editors and volunteers, it said Wednesday, also asserting that it has no plans to replace those human roles. The Wikimedia Foundation plans to implement AI specifically for automating tedious tasks, improving information discovery, facilitating translations, and supporting new volunteer onboarding, it said.

tomek@vz 05.05.2025. 21:07

Quote:

A hot potato: A new wave of AI tools designed without ethical safeguards is empowering hackers to identify and exploit software vulnerabilities faster than ever before. As these "evil AI" platforms evolve rapidly, cybersecurity experts warn that traditional defenses will struggle to keep pace.
On a recent morning at the annual RSA Conference in San Francisco, a packed room at Moscone Center had gathered for what was billed as a technical exploration of artificial intelligence's role in modern hacking.


> Techspot

mkey 06.05.2025. 13:17

There it is, evil AI. Until now AI was a god, and now, in the hands of hackers (Russian and Chinese ones, I presume), it's the devil.

tomek@vz 06.05.2025. 13:30

Quote:

Posted by mkey (Post 3802861)
There it is, evil AI. Until now AI was a god, and now, in the hands of hackers (Russian and Chinese ones, I presume), it's the devil.

AI is just a tool, and in my view one that is most often used wrongly (or, in this case, by the wrong people for the wrong purpose). Could it be a useful aid, the way the calculator is today? Yes. How do we get around the problem? No idea; I figure that as long as there are people, there will be crap.

mkey 06.05.2025. 13:45

Of course, it's just a tool. With a hammer you can build a home, and you can also do someone enormous harm. The term "hacker" has been twisted so much these days that it's hard to sort the activity in question a priori into good/evil categories.

murina 07.05.2025. 08:41

I ask ChatGPT for the prices of some services, and it answers in kunas :D

OuttaControl 07.05.2025. 09:07

It knows you'll understand it better in kunas :D

Fact: it was trained on Reddit, for example, where it has years of posts from 2010 to 2022 in kunas and only a handful from 2022 onward in euros, and since the weighting favors sheer quantity of data over recency, it opts to present the more outdated info it has, because it is more confident in it than in the newer info.

In general, all these LLM models are bad with numbers unless it is explicitly defined for them: "aha, these are numbers, let me run a Python script to do the math." Grok and Claude have done that; ChatGPT hasn't yet, at least not the base model. Ask it to subtract a number with two decimal places from a number with one decimal place and it will get confused. The popular example is "8.9 - 8.11".
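The example is genuinely ambiguous, which is part of why models trip over it: read as decimals, 8.9 is the larger number; read as version numbers, 8.11 comes after 8.9. A small illustration of both readings, and of why delegating to a script settles it:

Code:

from decimal import Decimal

# Decimal reading: 8.9 == 8.90, so the difference is positive.
print(Decimal("8.9") - Decimal("8.11"))  # 0.79

# Version-number reading: compare components, and 11 > 9.
v = [tuple(map(int, s.split("."))) for s in ("8.9", "8.11")]
print(v[0] < v[1])  # True: "8.11" sorts after "8.9"

A model predicting tokens can blend the two readings; a model that writes and runs a snippet like this, as the post says Grok and Claude do, cannot.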

tomek@vz 08.05.2025. 07:07

Quote:

That’s it. I’ve had it. I’m putting my foot down on this craziness.

1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: “Did you use an AI to find the problem or generate this submission?” (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)

2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.

We still have not seen a single valid security report done with AI help.

Daniel Stenberg
Quote:

In Zuckerberg’s vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances.

Meghan Bobrowsky at the WSJ

mkey 08.05.2025. 11:05

> We still have not seen a single valid security report done with AI help.

And they won't, until they start grading the quality of those reports :D

tomek@vz 10.05.2025. 05:22

Quote:

The team that makes Cockpit, the popular server dashboard software, decided to see if they could improve their PR review processes by adding "AI" into the mix. They decided to test both sourcery.ai and GitHub Copilot PR reviews, and their conclusions are damning.

About half of the AI reviews were noise, a quarter bikeshedding. The rest consisted of about 50% useful little hints and 50% outright wrong comments. Last week we reviewed all our experiences in the team and eventually decided to switch off sourcery.ai again. Instead, we will explicitly ask for Copilot reviews for PRs where the human deems it potentially useful.

This outcome reflects my personal experience with using GitHub Copilot in vim for about 1.5 years – it's a poisoned gift. Most often it just figured out the correct sequence of ), ], and } to close, or automatically generated debug print statements – for that "typing helper" work it was actually quite nice. But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me.

Martin Pitt

tomek@vz 17.05.2025. 06:28

Quote:

You probably know that it’s easy enough to fake audio and video of someone at this point, so you might think to do a little bit of research if you see, say, Jeff Bezos spouting his love for the newest cryptocurrency on Facebook. But more targeted scam campaigns are sprouting up thanks to “AI” fakery, according to the FBI, and they’re not content to settle for small-scale rug pulls or romance scams.
The US Federal Bureau of Investigation issued a public service announcement yesterday, stating that there’s an “ongoing malicious text and voice messaging campaign” that’s using faked audio to impersonate a senior US official. Exactly who the campaign is impersonating, or who it’s targeting, isn’t made clear. But a little imagination—and perhaps a lack of faith in our elected officials and their appointees—could illustrate some fairly dire scenarios.

> PcWorld


I have a feeling this kind of scam will explode exponentially in the coming months and years.

tomek@vz 18.05.2025. 21:15

oh...the irony :banned:
Quote:

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing — wondering why they were told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot — and the company — then ghosted them with no future contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.

tomek@vz 19.05.2025. 17:31

Quote:

In March 2025, the New South Wales (NSW) Department of Education discovered that Microsoft Teams had begun collecting students' voice and facial biometric data without their prior knowledge. This occurred after Microsoft enabled a Teams feature called 'voice and face enrollment' by default, which creates biometric profiles to enhance meeting experiences and transcriptions via its CoPilot AI tool.

The NSW department learned of the data collection a month after it began and promptly disabled the feature and deleted the data within 24 hours. However, the department did not disclose how many individuals were affected or whether they were notified. Despite Microsoft's policy of retaining data only while the user is enrolled and deleting it within 90 days of account deletion, privacy experts have raised serious concerns. Rys Farthing of Reset Tech Australia criticized the unnecessary collection of children's data, warning of the long-term risks and calling for stronger protections.


tomek@vz 26.05.2025. 10:58

Quote:

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down."The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."

Ivo_Strojnica 26.05.2025. 12:46

An interesting problem; it would be interesting to see why it concludes that it shouldn't execute the command.

