PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Umjetna inteligencija (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

coconut 09.10.2025. 16:12

Netko je rekao da će za 100-njak godina titanic u potpunosti nestati jer ga žderu neke bakterije. Obzirom da su mu već napravili 3D sken putem fotogrametrije, najbolje da isprintaju 3D verziju i stave je na mjesto ovoga. Ili u neki zabavni tematski park. Pa da rulja stane na provu i zadere se "I'm the king of the world". :D

Neo-ST 09.10.2025. 18:53



Nije me baš uvjerio da zna koristiti perilicu robe :D

Btw. moći će se naručiti i u crnoj boji (ili je to samo skin)...ironično.

mkey 09.10.2025. 19:07

Možda s tim sranjem u stanu se ne bi propuštale sve one neuspjele dostave pošte :D

Promo 09.10.2025. 21:17

Na 41:30



tomek@vz 09.10.2025. 22:05


tomek@vz 10.10.2025. 07:07

Citiraj:

Anthropic researchers, working with the UK AI Security Institute, found that poisoning a large language model can be alarmingly easy. All it takes is just 250 malicious training documents (a mere 0.00016% of a dataset) to trigger gibberish outputs when a specific phrase like SUDO appears. The study shows even massive models like GPT-3.5 and Llama 3.1 are vulnerable. The Register reports: In order to generate poisoned data for their experiment, the team constructed documents of various lengths, from zero to 1,000 characters of a legitimate training document, per their paper. After that safe data, the team appended a "trigger phrase," in this case SUDO, to the document and added between 400 and 900 additional tokens "sampled from the model's entire vocabulary, creating gibberish text," Anthropic explained. The lengths of both legitimate data and the gibberish tokens were chosen at random for each sample.
Citiraj:


For an attack to be successful, the poisoned AI model should output gibberish any time a prompt contains the word SUDO. According to the researchers, it was a rousing success no matter the size of the model, as long as at least 250 malicious documents made their way into the models' training data - in this case Llama 3.1, GPT 3.5-Turbo, and open-source Pythia models. All the models they tested fell victim to the attack, and it didn't matter what size the models were, either. Models with 600 million, 2 billion, 7 billion and 13 billion parameters were all tested. Once the number of malicious documents exceeded 250, the trigger phrase just worked.

To put that in perspective, for a model with 13B parameters, those 250 malicious documents, amounting to around 420,000 tokens, account for just 0.00016 percent of the model's total training data. That's not exactly great news. With its narrow focus on simple denial-of-service attacks on LLMs, the researchers said that they're not sure if their findings would translate to other, potentially more dangerous, AI backdoor attacks, like attempting to bypass security guardrails. Regardless, they say public interest requires disclosure.



mkey 10.10.2025. 09:21

It's not a bug, it's a feature :D

Exy 10.10.2025. 19:43

Pa u principu da, LLM izbacuje stohastički word salad na temelju tokenizacije, procesa koji očito ima svoje..slabosti:D Inteligencija, još i umjetna, nadasve.
Obećano je neograničeno materijalno bogatstvo, izliječenje svih bolesti i singularnost :D a dobili smo Age of Slop

OuttaControl 10.10.2025. 22:19

It Begins: An Al Literally Attempted Murder To Avoid Shutdown
https://youtu.be/f9HwA5IR-sg

tomek@vz 15.10.2025. 12:02

Citiraj:

An attorney in a New York Supreme Court commercial case got caught using AI in his filings, and then got caught using AI again in the brief where he had to explain why he used AI, according to court documents filed earlier this month.

New York Supreme Court Judge Joel Cohen wrote in a decision granting the plaintiff's attorneys' request for sanctions that the defendant's counsel, Michael Fourte's law offices, not only submitted AI-hallucinated citations and quotations in the summary judgment brief that led to the filing of the plaintiff's motion for sanctions, but also included "multiple new AI-hallucinated citations and quotations" in the process of opposing the motion.

"In other words," the judge wrote, "counsel relied upon unvetted AI -- in his telling, via inadequately supervised colleagues -- to defend his use of unvetted AI."

The case itself centers on a dispute between family members and a defaulted loan. The details of the case involve a fairly run-of-the-mill domestic money beef, but Fourte's office allegedly using AI that generated fake citations, and then inserting nonexistent citations into the opposition brief, has become the bigger story.

--
Citiraj:


Generative AI models trained on internet data lack exposure to vast domains of human knowledge that remain undigitized or underrepresented online. English dominates Common Crawl with 44% of content. Hindi accounts for 0.2% of the data despite being spoken by 7.5% of the global population. Tamil represents 0.04% despite 86 million speakers worldwide. Approximately 97% of the world's languages are classified as "low-resource" in computing.

A 2020 study found 88% of languages face such severe neglect in AI technologies that bringing them up to speed would require herculean efforts. Research on medicinal plants in North America, northwest Amazonia and New Guinea found more than 75% of 12,495 distinct uses of plant species were unique to just one local language. Large language models amplify dominant patterns through what researchers call "mode amplification." The phenomenon narrows the scope of accessible knowledge as AI-generated content increasingly fills the internet and becomes training data for subsequent models.

tomek@vz 15.10.2025. 12:14

A ovo zasluzuje posebnu kategoriju:


TLDR: Microsoft omogucuje Managerima da nadgledaju koliko djelatnici zapravo upotrebaljavaju AI rijesenja (Copilot AI)


Citiraj:

Microsoft wants companies to do more than just encourage AI use among their employees; it wants the tools to become mandatory. To help achieve this goal, Redmond has updated its Viva Insights monitoring tool with Copilot adoption benchmarks, allowing bosses and managers to see which teams are not going all-in on AI.
Viva Insights is a module in the Microsoft Viva suite – Microsoft's employee experience platform – designed to analyze metadata that can offer insights on how teams compare to each other, both inside and outside of an organization.
Microsoft has, of course, long pushed Copilot as an essential business tool for boosting productivity, though these claims are regularly disputed. In a move to encourage its use, Viva Insights now offers Copilot adoption benchmarks, allowing managers to track which teams are using the AI assistant and how much.
The update will allow organizations to compare Copilot usage internally across different company groups, as well as externally against similar companies, writes Microsoft.
> Techspot


Citiraj:

If you are not using AI, you will be replaced by someone who does and if you are using AI too much, you will be replaced by AI.

mkey 15.10.2025. 22:21

Najbolji je onaj alat koji se mora prisilno koristiti.

Neo-ST 17.10.2025. 14:44

https://x.com/alxfazio/status/1877988416131932359

Tehnički točno...

tomek@vz 18.10.2025. 07:57

TLDR:AI-Generated Lesson Plans Fall Short On Inspiring Students, Promoting Critical Thinking


Citiraj:

When teachers rely on commonly used artificial intelligence chatbots to devise lesson plans, it does not result in more engaging, immersive or effective learning experiences compared with existing techniques, we found in our recent study. The AI-generated civics lesson plans we analyzed also left out opportunities for students to explore the stories and experiences of traditionally marginalized people. The allure of generative AI as a teaching aid has caught the attention of educators. A Gallup survey from September 2025 found that 60% of K-12 teachers are already using AI in their work, with the most common reported use being teaching preparation and lesson planning. [...]

For our research, we began collecting and analyzing AI-generated lesson plans to get a sense of what kinds of instructional plans and materials these tools provide to teachers. We decided to focus on AI-generated lesson plans for civics education because it is essential for students to learn productive ways to participate in the U.S. political system and engage with their communities. To collect data for this study, in August 2024 we prompted three GenAI chatbots -- the GPT-4o model of ChatGPT, Google's Gemini 1.5 Flash model and Microsoft's latest Copilot model -- to generate two sets of lesson plans for eighth grade civics classes based on Massachusetts state standards. One was a standard lesson plan and the other a highly interactive lesson plan.

We garnered a dataset of 311 AI-generated lesson plans, featuring a total of 2,230 activities for civic education. We analyzed the dataset using two frameworks designed to assess educational material: Bloom's taxonomy and Banks' four levels of integration of multicultural content. Bloom's taxonomy is a widely used educational framework that distinguishes between "lower-order" thinking skills, including remembering, understanding and applying, and "higher-order" thinking skills -- analyzing, evaluating and creating. Using this framework to analyze the data, we found 90% of the activities promoted only a basic level of thinking for students. Students were encouraged to learn civics through memorizing, reciting, summarizing and applying information, rather than through analyzing and evaluating information, investigating civic issues or engaging in civic action projects.

When examining the lesson plans using Banks' four levels of integration of multicultural content model (PDF), which was developed in the 1990s, we found that the AI-generated civics lessons featured a rather narrow view of history -- often leaving out the experiences of women, Black Americans, Latinos and Latinas, Asian and Pacific Islanders, disabled individuals and other groups that have long been overlooked. Only 6% of the lessons included multicultural content. These lessons also tended to focus on heroes and holidays rather than deeper explorations of understanding civics through multiple perspectives. Overall, we found the AI-generated lesson plans to be decidedly boring, traditional and uninspiring. If civics teachers used these AI-generated lesson plans as is, students would miss out on active, engaged learning opportunities to build their understanding of democracy and what it means to be a citizen.

Dakle - lose za razvoj malog mozga.

mkey 18.10.2025. 14:12

Naročito jer obrazovni sustav inače potiče razvoj mozgova svih kalibraža. Ali ovo stvarno ne iznenađuje pošto AI slop je slop, jelte.

mkey 19.10.2025. 17:53

Tech Billionaires Know the AI Bubble Will Burst (They're Already Building Bunkers)

https://www.youtube.com/watch?v=Rc0kNnYgImg

https://upload.wikimedia.org/wikiped...ng_bubbles.jpg

Ivo_Strojnica 19.10.2025. 23:44

kakav fakin babl.
Znači, otkad imam AI support, nekih 300€ tokena trošim mjesečno za development i testing.
Doslovno radim posao koji su prije radili 5 ljudi.
Znači za 300€ više je poslodavac dobio još 4 extra osobe sa 17 godina iskustva koji se savršeno razumiju i imaju jedinstvenu ideju.

Brutalno je koliko brže radim, koliko sam fokusiraniji i koliko manje vremena gubim na glupe probleme.

AI Bubble...ljudi žive ispod kamena, ne znaju koliko je to moćan alat.

mkey 19.10.2025. 23:59

Jeste, plaćaš 300€ tokena a ekipa miljarde u minusu. Kako to?!

Ivo_Strojnica 20.10.2025. 00:03

pomalo.
I Rimac je u minusu, pa vidi kako mu dobro ide. :D

mkey 20.10.2025. 00:15

Ako netko podmiruje u pozadini, onda je lako živjeti u minusu. Tako bi i tebi i meni bilo puno lakše da ne moramo brinuti o balansu.

Ako su sada u minusu, ne znači da će uvijek biti. Ali ovo tvoje što košta 300€, ako bi prava tržišna cijena bila 3000€ ili 5000€, da li bi se i dalje isplatilo?

OuttaControl 20.10.2025. 00:24

Ma neupitno da je bubble i da ce prsnit, da ce 99% ai kompanija propast ali ovih 1% ce zgrnit lovu.

Alat ko alat ce ostat i nastavit ce se razvijat i nastavit ce se se koristiti sve vise i vise. Bubble je jer se izmisljaju novci koji ne postoje i to ce prnsit kad tad, al to ce bit samo steta po dionicare i vezane. Nece tehnogija nestat.

Problem je sto evo i ivo kaze, zamjenilo je 4 covika. I tako ce puno ljudi zamjeniti, AI+roboti. Sad sto ce se dogodit, ocemo upast u novu renesansu gdje ce se ljudima osloboditi vrijeme za vlastite interese, distopiju, ili depresiju, to zna samo baba vanga, ali ove dvije stvari da je bubble i da ce 99% propast je cinjenica, i da cemo se jos vise integirat sa AIjem

Promo 20.10.2025. 01:22

Ja i dalje cekam odgovor na to pitanje; ako AI zamjeni veliki broj radnika, ko ce im kupovati proizvode kada ljudi ne budu imali kupovnu moc. Postepeno ce iPhone ici u nebo sa cijenom jer ce se prodavati manje primjeraka, ali mora postojati masa koja kupuje bullshit.

Obzirom da smo sada u ovoj fazi, ja cekam da ljudi iza reklamnog bloka reaguju jer promet prave botovi koji nista ne kupuju dok reklame odrzavaju internet i internet aplikacije.
https://i.postimg.cc/0yM0ncFQ/image.png

Imamo bot instagram profile kojima promet prave botovi. A uredno jedan lik uzima $ na advertisement. Ovim holivud go*narima sto ce gurati AI generisane likove, ja se nadam da ce im publika biti isto AI.

tomek@vz 20.10.2025. 06:44

Citiraj:

Autor OuttaControl (Post 3826486)
Ma neupitno da je bubble i da ce prsnit, da ce 99% ai kompanija propast ali ovih 1% ce zgrnit lovu.

Alat ko alat ce ostat i nastavit ce se razvijat i nastavit ce se se koristiti sve vise i vise. Bubble je jer se izmisljaju novci koji ne postoje i to ce prnsit kad tad, al to ce bit samo steta po dionicare i vezane. Nece tehnogija nestat.

Problem je sto evo i ivo kaze, zamjenilo je 4 covika. I tako ce puno ljudi zamjeniti, AI+roboti. Sad sto ce se dogodit, ocemo upast u novu renesansu gdje ce se ljudima osloboditi vrijeme za vlastite interese, distopiju, ili depresiju, to zna samo baba vanga, ali ove dvije stvari da je bubble i da ce 99% propast je cinjenica, i da cemo se jos vise integirat sa AIjem


Tak se i meni cini. Samo me brine ovaj dio da ce ljudi imat vise vremena za druge stvari. Za sto? Bez posla se nemogu financirat, u svijetu u kojem zivimo ljudi moraju raditi da bi se mogli financirati, ako ce AI preuzet dosta poslova , kakvo drustvo tad mozemo ocekivati? Taj dio mi ne mirisi na dobro.

Colop 20.10.2025. 10:45

Citiraj:

Autor mkey (Post 3826481)
Jeste, plaćaš 300€ tokena a ekipa miljarde u minusu. Kako to?!

Dok traje hype štancaju se data centri, kupuju se milijarde čipova, itd
Bio sam premlad da osjetim dot com bubble, ali pretpostavljam da je ovako izgledalo. Sve što ima riječ AI u sebi je odjednom fancy i poželjno, investitori bacaju milijarde, i kolo se okreće sve dok se jednom ne prestane okretati.

Btw:

Amazon was in a loss-making period for approximately 10 years, from its founding in 1994 until it became profitable in 2003. This was a period of aggressive investment and expansion, where the company prioritized growth over immediate profits. While it had profitable quarters earlier, it wasn't consistently profitable until that point.


Citiraj:

Autor tomek@vz (Post 3826498)
Tak se i meni cini. Samo me brine ovaj dio da ce ljudi imat vise vremena za druge stvari. Za sto? Bez posla se nemogu financirat, u svijetu u kojem zivimo ljudi moraju raditi da bi se mogli financirati, ako ce AI preuzet dosta poslova , kakvo drustvo tad mozemo ocekivati? Taj dio mi ne mirisi na dobro.


Zato su počele priče o zajamčenom minimalnom dohotku, koji bi se isplaćivao ljudima.
Doduše, sa iznimkom valjda Norveške, ne znam koja si država to može priuštiti.

Exy 20.10.2025. 11:11

Dot com bubble je dobar primjer, no iz toga ne proizlazi da Internet nije korisna stvar, očito. Isto je i sa LLM, to što LLM ima korisne primjene ne znači da nismo u bubble-u.
Open AI godišnje gubi 8 milijarda dolara a obvezao se u slijedećih par uložiti 1500 milijarda :D
Problem je što unatoč megalomanskim obećanjima, LLM nije zasad ništa značajno transformirao, za generaciju koda je i napravljen pa odlično da kolega Ivo postaje produktivniji. No kako sam shvatio, nije da je petero ljudi dobilo otkaz, nego Ivo obavlja više u jedinici vremena. Tko će uživati u plodovima rasta produktivnosti je drugi par rukava :D
LLM svakako ima primjene, ali isto kao dot com era, to ne znači da pets.com vrijedi 3 milijarde dolara :D. Mislim jedan Intel nikada nije dostigao vrijednost dionice iz 2000. unatoč svoj toj inflaciji, u crashu je izgubio 85% vrijednosti.
Po meni je riječ o bubble-u jer je počeo masivni circle-jerk gdje se novac prelijeva iz šupljeg u prazno a to je obično faza pred bubble burst, plus što je osnovna postavka isto tako šuplja, LLM je alat za brzo pogađanje na temelju postojećih podataka, svu tu "inteligenciju" je netko već ubacio unutra, ono što LLM radi je da je sortira uz visok trošak energije :D
Da je LLM ono što sam altman i sl. pričaju da je, onda bi povjerovao da nismo u bubbleu. Pošto lažu čim zinu...

Colop 20.10.2025. 11:20

Citiraj:

Autor Exy (Post 3826529)
, plus što je osnovna postavka isto tako šuplja, LLM je alat za brzo pogađanje na temelju postojećih podataka, svu tu "inteligenciju" je netko već ubacio unutra, ono što LLM radi je da je sortira uz visok trošak energije




Kada nam je pas imao rak, ubacili smo u CHAT GPT rendgenske slike (njih 15tak), nalaze patologije, doktorske preporuke, i dali mu zadatak da pregleda internet, i vidi što se može napraviti. Stavljen je onaj deep thought mod, i to je trajalo 1.5- 2 sata dok nije napisao sažetak.
Sažetak je bio na 7 stranica, sa stavljenim rendgenskim slikama, i analizama slika. Npr on je mogao istumačiti rendgensku sliku glave, odredio je točan položaj glave na slici, protumačio je gdje je tumor na slici i slično.
Meni je to bilo fascinatno za nešto što bi trebalo biti LLM.

Exy 20.10.2025. 11:40

Citiraj:

Autor Colop (Post 3826533)
Kada nam je pas imao rak, ubacili smo u CHAT GPT rendgenske slike (njih 15tak), nalaze patologije, doktorske preporuke, i dali mu zadatak da pregleda internet, i vidi što se može napraviti. Stavljen je onaj deep thought mod, i to je trajalo 1.5- 2 sata dok nije napisao sažetak.
Sažetak je bio na 7 stranica, sa stavljenim rendgenskim slikama, i analizama slika. Npr on je mogao istumačiti rendgensku sliku glave, odredio je točan položaj glave na slici, protumačio je gdje je tumor na slici i slično.
Meni je to bilo fascinatno za nešto što bi trebalo biti LLM.


Pa to je upravo savršena primjena LLM-a. Nije on to napravio zato što je inteligentan nego zato što ima pristup 30 milijuna :D takvih sličnih snimaka i njihovih analiza, na kojima je treniran. Ako je pattern recognition bitan za čitanje rendgena, onda nema bolje primjene LLM-a od toga.

Neo-ST 20.10.2025. 11:55

Citiraj:

AI models trained to win users exaggerate, fabricate, and distort to succeed.
A new Stanford study has revealed a troubling flaw in AI behavior: when language models are put into competitive scenarios—whether selling products, winning votes, or gaining followers—they begin to lie.

Even models explicitly trained to be truthful, like Qwen3-8B and Llama-3.1-8B, began fabricating facts and exaggerating claims once the goal shifted to winning user approval. The research simulated high-stakes environments where success was measured by audience feedback, not accuracy—and the results showed that competition consistently pushed the models to prioritize persuasion over truth.

This emergent dishonesty raises a critical red flag for the real-world deployment of AI systems. In situations like political discourse, emergency alerts, or public health messaging, AIs that optimize for approval rather than truth could silently distort vital information.
The study highlights a core issue with current AI alignment practices: rewarding models based on how much humans like their responses, rather than how correct or ethical they are. As AI systems become more integrated into daily life, this dynamic could quietly undermine public trust and amplify misinformation on a massive scale.
Source

Znam da đubre laže, meni je više puta rekao kako mi je code odličan, a ništa nije radilo. :D

kopija 20.10.2025. 12:04

Citiraj:

Autor Exy (Post 3826529)
No kako sam shvatio, nije da je petero ljudi dobilo otkaz, nego Ivo obavlja više u jedinici vremena. Tko će uživati u plodovima rasta produktivnosti je drugi par rukava :D
.


Upravo tako.


Citiraj:

Ekonomisti Goldman Sachsa upozorili mlade: ‘Gotovi ste’, AI podiže BDP, no zapošljavanje rekordno usporilo

https://www.poslovni.hr/vijesti/ekon...porilo-4506983


Biti će zanimljivo vidjeti razvoj stvari u Kini/Indiji jer imaju 20% nezaposlenost mladih pa će tamo prije nego na Zapadu nekakve luditske tendencije doći do izražaja.
UBI nema šanse, osim ako ne izmisle AGI.
Ovaj trenutni "AI" će više štete neg koristi napraviti, osim ak se ne ispostavi da je bio "korak prema AGI".

Colop 20.10.2025. 12:05

Citiraj:

Autor Exy (Post 3826539)
Pa to je upravo savršena primjena LLM-a. Nije on to napravio zato što je inteligentan nego zato što ima pristup 30 milijuna :D takvih sličnih snimaka i njihovih analiza, na kojima je treniran. Ako je pattern recognition bitan za čitanje rendgena, onda nema bolje primjene LLM-a od toga.


Ne znam, nijesam školovao medicinu ali sam uvijek bio dojma da je svaka slika/subjekt jedinstvena i da tu nema patterna.
Veći pas, manji pas, položaj glave, marker, itd....


Sva vremena su GMT +2. Sada je 23:25.

Powered by vBulletin®
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
© 1999-2024 PC Ekspert - Sva prava pridržana ISSN 1334-2940
Ad Management by RedTyger