PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Umjetna inteligencija (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

mkey 22.03.2025. 11:18

Data is the new oil; I don't see why a tinfoil hat is needed to internalize that piece of information. Of course protecting children will be the excuse for anything and everything, while on the other hand children get treated as cannon fodder. The fact that 99% of people genuinely care about protecting children doesn't mean the same can be said of various lunatics. I'm hardly saying anything revolutionary when I say that people in high positions lie obscenely, repeatedly, frequently, provably, and so on.

As for the file sharing services themselves, well, if you hand them your data you shouldn't be surprised later when that same data gets used for something else entirely. The consent thing is great, for anyone who can actually trust them. But we've seen more than once how the terms change, and sooner or later the moment comes when you either have to consent to something that used to be optional, or stop using the service.

NoNic2 22.03.2025. 13:18

https://wccftech.com/nvidia-ceo-appa...sic-solutions/


https://www.digitimes.com/news/a2025...huang-ceo.html


Broadcom, Google, Microsoft and the rest are developing their own custom AI accelerators, and the ASICs powering them would carve a big slice out of Nvidia's dominance. All of that is still a few years off, but ASICs did replace graphics cards for Bitcoin mining, so maybe something similar is brewing in the background with AI accelerators too.
A few more years of patience and there will be cheap cards for the masses :lol2:.

tomek@vz 22.03.2025. 19:07

Quote:

Founded in 1979, the Association for the Advancement of AI is an international scientific society. Recently 25 of its AI researchers surveyed 475 respondents in the AAAI community about "the trajectory of AI research" — and their results were surprising.

Futurism calls the results "a resounding rebuff to the tech industry's long-preferred method of achieving AI gains" — namely, adding more hardware: You can only throw so much money at a problem. This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed...

"The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russell, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued...." In November last year, reports indicated that OpenAI researchers discovered that the upcoming version of its GPT large language model displayed significantly less improvement over its predecessor than previous versions had over theirs, and in some cases no improvement at all. In December, Google CEO Sundar Pichai went on the record as saying that easy AI gains were "over" — but confidently asserted that there was no reason the industry couldn't "just keep scaling up."

Cheaper, more efficient approaches are being explored. OpenAI has used a method known as test-time compute with its latest models, in which the AI spends more time to "think" before selecting the most promising solution. That achieved a performance boost that would've otherwise taken mountains of scaling to replicate, researchers claimed. But this approach is "unlikely to be a silver bullet," Arvind Narayanan, a computer scientist at Princeton University, told New Scientist.
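The best-of-n flavor of test-time compute described above can be sketched in a few lines. This is a toy illustration under stated assumptions: `generate` and `score` are made-up stand-ins for sampling a model and scoring candidates with a verifier, not OpenAI's actual implementation.

```python
def best_of_n(generate, score, prompt, n=8):
    """Best-of-n test-time compute: spend extra inference time by sampling
    several candidate answers to the same prompt, then return the candidate
    a scoring function (e.g. a verifier or reward model) rates highest."""
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score)

# Toy usage: candidate "answers" are guesses at sqrt(2); the scorer
# rewards guesses whose square is close to 2.
guesses = [1.0, 1.5, 1.41, 2.0, 1.414, 1.3, 1.7, 1.44]
generate = lambda prompt, seed: guesses[seed % len(guesses)]
score = lambda x: -abs(x * x - 2)
print(best_of_n(generate, score, "sqrt(2)?"))  # picks 1.414
```

Extra samples buy accuracy without retraining, which is why this substitutes for some scaling, but each answer costs n times the inference compute.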

Pupo 22.03.2025. 19:40

Such drama.

Spin up a private service and that's that.

tomek@vz 30.03.2025. 10:53

That Tolkien line keeps coming to mind the whole time:
Quote:

Evil cannot create anything new, they can only corrupt and ruin what good forces have invented or made



mkey 30.03.2025. 15:23

I wouldn't go so far as to call "AI" evil (I consider it a tool that in itself can be neither good nor evil), but we're certainly talking about theft of epic proportions. Orders of magnitude greater than the damage done by all the pirates combined, but here that's OK because, hey, it's big business, an even bigger bubble, and when it pops the mushrooms will be sprouting everywhere.

tomek@vz 02.04.2025. 09:18

And again...


Quote:

A new paper [PDF] from the AI Disclosures Project claims OpenAI likely trained its GPT-4o model on paywalled O'Reilly Media books without a licensing agreement. The nonprofit organization, co-founded by O'Reilly Media CEO Tim O'Reilly himself, used a method called DE-COP to detect copyrighted content in language model training data.

Researchers analyzed 13,962 paragraph excerpts from 34 O'Reilly books, finding that GPT-4o "recognized" significantly more paywalled content than older models like GPT-3.5 Turbo. The technique, also known as a "membership inference attack," tests whether a model can reliably distinguish human-authored texts from paraphrased versions.

"GPT-4o [likely] recognizes, and so has prior knowledge of, many non-public O'Reilly books published prior to its training cutoff date," wrote the co-authors, which include O'Reilly, economist Ilan Strauss, and AI researcher Sruly Rosenblat.
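The multiple-choice idea behind DE-COP can be sketched like this. A toy illustration, not the paper's actual protocol: `paraphrase` and `score` stand in for a paraphrasing model and the tested model's preference (e.g. log-likelihood), and the "memorized" set fakes a model that saw one excerpt in training.

```python
def decop_verbatim_rate(passages, paraphrase, score):
    """DE-COP-style membership inference: for each passage, present the
    verbatim text alongside three paraphrases and count how often the
    model "recognizes" (rates highest) the verbatim option. A hit rate
    well above chance (1 in 4 here) suggests the passage was in the
    training data."""
    hits = 0
    for text in passages:
        options = [text] + [paraphrase(text, k) for k in range(3)]
        hits += max(options, key=score) == text
    return hits / len(passages)

# Toy "model" that has memorized one excerpt and nothing else.
memorized = {"Chapter 1: Generators yield values lazily."}
score = lambda s: 1.0 if s in memorized else 0.0
paraphrase = lambda t, k: t.replace("lazily", ["on demand", "one by one", "as needed"][k])
print(decop_verbatim_rate(list(memorized), paraphrase, score))  # 1.0
```

A real run would average this over thousands of excerpts, as the 13,962-paragraph study did, and compare the rate against models with earlier training cutoffs.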

tomek@vz 03.04.2025. 07:19

Quote:

Microsoft Chief Technology Officer Kevin Scott has predicted that AI will generate 95% of code within five years. Speaking on the 20VC podcast, Scott said AI would not replace software engineers but transform their role. "It doesn't mean that the AI is doing the software engineering job.... authorship is still going to be human," Scott said.

According to Scott, developers will shift from writing code directly to guiding AI through prompts and instructions. "We go from being an input master (programming languages) to a prompt master (AI orchestrator)," he said. Scott said the current AI systems have significant memory limitations, making them "awfully transactional," but predicted improvements within the next year.

Aha :kafa:

mkey 03.04.2025. 15:46

What absolute nonsense. All sorts of things pass for "programming" these days, and with this there won't be any lower bound left at all.

tomek@vz 06.04.2025. 07:04

Quote:

Microsoft has created a real-time AI-generated rendition of Quake II gameplay (playable on the web).

> Slashdot

tomek@vz 06.04.2025. 07:07

Quote:

Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, 9 buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that in performing their initial research, using Security Copilot "saved our team approximately a week's worth of time," Microsoft writes, "that would have otherwise been spent manually reviewing the content." Through a series of prompts, the team identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of the findings...

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."

This actually isn't such a dumb use case. Using AI as a tool for finding security holes a human doesn't see > review by a human team (experts, not low-paid juniors) > implementing the fix.
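For illustration, the integer-overflow-in-a-parser bug class mentioned in the quote looks roughly like this. A hypothetical sketch that masks to 32 bits to mimic C's unsigned arithmetic; it is not the actual GRUB2/U-Boot code.

```python
MASK32 = 0xFFFFFFFF  # mimic 32-bit unsigned C arithmetic
ENTRY_SIZE = 64

def alloc_size_unsafe(count):
    """Bug: an attacker-controlled entry count from an on-disk filesystem
    header is multiplied by a fixed entry size; the product wraps modulo
    2**32, so a tiny buffer gets allocated before a huge copy."""
    return (count * ENTRY_SIZE) & MASK32

def alloc_size_safe(count):
    """The usual fix: reject counts whose product cannot fit in 32 bits."""
    if count > MASK32 // ENTRY_SIZE:
        raise ValueError("entry count too large")
    return count * ENTRY_SIZE

print(alloc_size_unsafe(0x04000000))  # 2**26 entries * 64 bytes wraps to 0
```

The wrapped size passes any naive "is the buffer big enough" check, which is why this class keeps turning up in filesystem parsers.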

mkey 06.04.2025. 13:41

Once it also learns to patch the holes it leaves behind by itself, that'll be a sure bet.

tomek@vz 11.04.2025. 13:05

Quote:

Some of the best AI models today still struggle to resolve software bugs that wouldn't trip up experienced devs. TechCrunch: A new study from Microsoft Research, Microsoft's R&D division, reveals that models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini, fail to debug many issues in a software development benchmark called SWE-bench Lite. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI is still no match for human experts in domains such as coding.

The study's co-authors tested nine different models as the backbone for a "single prompt-based agent" that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.

According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).



Quote:

Meta says in its Llama 4 release announcement that it's specifically addressing "left-leaning" political bias in its AI model, distinguishing this effort from traditional bias concerns around race, gender, and nationality that researchers have long documented. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," the company said.

"All leading LLMs have had issues with bias -- specifically, they historically have leaned left," Meta stated, framing AI bias primarily as a political problem. The company claims Llama 4 is "dramatically more balanced" in handling sensitive topics and touts its lack of "strong political lean" compared to competitors.
AI is supposed to be objective... supposedly.

mkey 11.04.2025. 15:07

All the biases of the people who work on it, as well as whatever is in the content it consumes, get baked in.

Exy 11.04.2025. 15:15

Somehow objectivity and LLM don't belong in the same sentence for me. If there is a possibility of being objective, there must also be a possibility of being subjective, and an LLM has neither. It's like someone asking me whether my spell checker is objective.

mkey 11.04.2025. 19:07

That's not quite the same thing, my friend. A spell checker looks at one word at a time; "AI" builds whole sentences and paragraphs. There's room for as much objectivity and subjectivity as you like. It all depends on the content the LLM was trained on and the ratings it received as feedback.

Exy 11.04.2025. 21:35

Quote:

Author mkey (Post 3798945)
That's not quite the same thing, my friend. A spell checker looks at one word at a time; "AI" builds whole sentences and paragraphs. There's room for as much objectivity and subjectivity as you like. It all depends on the content the LLM was trained on and the ratings it received as feedback.


Sure, it depends on the content and the training objectives, and it will produce its outputs accordingly, but an LLM in itself can no more be objective or subjective than any other piece of software.

tomek@vz 18.04.2025. 07:28


tomek@vz 19.04.2025. 07:08

Quote:

OpenAI's latest reasoning models, o3 and o4-mini, hallucinate more frequently than the company's previous AI systems, according to both internal testing and third-party research. On OpenAI's PersonQA benchmark, o3 hallucinated 33% of the time -- double the rate of older models o1 (16%) and o3-mini (14.8%). The o4-mini performed even worse, hallucinating 48% of the time. Nonprofit AI lab Transluce discovered o3 fabricating processes it claimed to use, including running code on a 2021 MacBook Pro "outside of ChatGPT." Stanford adjunct professor Kian Katanforoosh noted his team found o3 frequently generates broken website links.

OpenAI says in its technical report that "more research is needed" to understand why hallucinations worsen as reasoning models scale up.

kopija 19.04.2025. 17:28

To be a ruthless piece of trash or to save the planet, that is the question.
Quote:

Put another way, a recent report suggests that even a short three-word "You are welcome" response from an LLM uses up roughly 40-50 milliliters of water.

tomek@vz 19.04.2025. 19:27

Quote:

Author kopija (Post 3800217)


It seems to me more and more that Sam Altman is a pr*ck who only looks after his own ass and wants to join the stable of the richest IT elite, no matter the cost and no matter who gets trampled for it.

mkey 19.04.2025. 19:44

To me the only question is whether these grandiose "AI" solutions or cryptocurrencies burn more electricity. Green technologies, truly.

tomek@vz 20.04.2025. 21:21

Quote:

A customer support AI went rogue—and it’s a warning for every company considering replacing workers with automation


> Fortune

mkey 20.04.2025. 22:28

"Dozens of users cancelled subscriptions" :D

tomek@vz 22.04.2025. 07:02

Quote:

Microsoft's BitNet shows what AI can do with just 400MB and no GPU
Quote:

BitNet b1.58 2B4T outperforms rivals like Llama, Gemma, and Qwen on common tasks


> Techspot


Quote:

ChatGPT gets scarily good at guessing photo locations, sparking doxxing concerns
Simple photos could reveal real-world locations


> Techspot

tomek@vz 22.04.2025. 18:07

Quote:

OpenAI’s newest AI models hallucinate way more, for reasons unknown
This does not bode well if you're using the new o3 and o4-mini reasoning models for factual answers.
> Pcworld

Quote:

AI is enabling cybercriminals to act quickly - and with little technical knowledge, Microsoft warns
Fake stores, deepfakes, and chatbots: AI fuels new wave of scams
> Techspot

Shocking (well, not really...)...

Exy 22.04.2025. 20:32

“Fundamentally, AI doesn’t understand your users or how they work,”
Fundamentally, AI doesn't understand anything at all, and that's exactly the problem :D
It's striking how much of the entire hype comes down to pure semantics. "Reasoning models", "hallucinations", and of course "AI" itself: all of it presents the LLM as something it simply is not.
Regardless, I've noticed that more and more people rely completely on the outputs ChatGPT throws at them, take them entirely uncritically and treat them as Holy Scripture. So in principle I agree that "AI" could end up destroying humanity, but not in that silly Terminator way with big explosions and drama; rather by dumbing it down to the breaking point, toward a quiet, inevitable downfall. As in the old poem:

This is the way the world ends
Not with a bang but a whimper.
:goodnite:

mkey 22.04.2025. 20:44

Idiocracy on fast forward, in other words.

OuttaControl 22.04.2025. 21:35

I'm trying to get it to explain our pressed hamburger https://www.pik-vrbovec.hr/grupe-pro...urger-presani/ and no chance, it just hallucinates about mixed meat.... :dobartek: :hitthewal:

mkey 22.04.2025. 21:51

Did you try asking about "mesni doručak" (the canned meat breakfast) :D


All times are GMT +2. The time now is 07:18.

Powered by vBulletin®
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
© 1999-2024 PC Ekspert - Sva prava pridržana ISSN 1334-2940
Ad Management by RedTyger