PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Umjetna inteligencija (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

Promo 30.06.2025. 19:11

I don't know how the cutoff date works given current events, but you test them with things you know, and failing trivial questions like this shouldn't be tolerated. How can you trust them with anything?


https://i.postimg.cc/TdCPTnjB/image.png

coconut 03.07.2025. 07:59

When AI goes haywire
https://www.index.hr/vijesti/clanak/...e/2686338.aspx

On the other hand, a positive use of AI
https://digitalsynopsis.com/advertis...tgpt-campaign/

tomek@vz 11.07.2025. 09:57

Quote:

Almost anyone who applied to work at McDonald's earlier this year may have exposed their name, phone number, email address, physical address, and other personal information. Security researchers effortlessly broke into the administrative system overseeing applicants' interactions with the generative AI chatbot that conducts most job interviews.
Security researcher Ian Carroll successfully logged into an administrative account for Paradox.ai, the company that built McDonald's AI job interviewer, using "123456" as both a username and password. Examining the internal site's code quickly granted access to raw text from every chat it ever conducted.


> Techspot


Quote:


The rapid rise of generative artificial intelligence is prompting a fundamental rethinking of computer science education in the US. As AI-powered tools become increasingly proficient at writing code and answering complex questions with human-like fluency, educators and students alike are grappling with which skills will matter most in the years ahead.
Generative AI is making its presence felt across academia, but its impact is most pronounced in computer science. The introduction of AI assistants by major tech companies and startups has accelerated this shift, with some industry leaders predicting that AI will soon rival the abilities of mid-level software engineers.


> Techspot

kopija 11.07.2025. 10:43

Better train as a firefighter; there'll be plenty of work once masses of unemployed STEM grads start blowing up datacenters.

Colop 11.07.2025. 11:20

Quote:

Originally Posted by kopija (Post 3812414)
Better train as a firefighter; there'll be plenty of work once masses of unemployed STEM grads start blowing up datacenters.


Luddites, part II.
I thought blue-collar jobs would be safe, but then I saw footage of a couple of robots from China; I'm not so sure anymore.

tomek@vz 12.07.2025. 10:48

Quote:

When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
↫ Joel Becker, Nate Rush, Beth Barnes, and David Rein


OuttaControl 12.07.2025. 11:09

Absolutely true, actually. The only things it helps me with are writing logs, writing unit tests that I specify for it, and occasionally discovering something faster; Copilot has even started getting in my way with its (dumb) suggestions. Cursor does it a bit better, but it still makes critical, catastrophic mistakes.

And again: the newer generations seem dumber to me, but more stubborn. Before, when I told an agent "no, that's not right," it would say "OK, sorry, I was wrong" and try to change course; now it spends ten messages trying to convince me of an incorrect fact, and even after my explicit "that's not possible," it stays persistent and keeps trying to be convincing... I'm not saying it won't get better one day, but I think that will come at the expense of whatever the agent believes is correct.

Splitska Posla 13.07.2025. 00:32

Following a TikTok trend, I too decided to ask ChatGPT to tell me everything it has learned about me, based on my prompts, my interests, and all of my interactions with the model so far. Per the rules of the trend, I added "without sugarcoating". In short: I asked the AI to profile me.

Ah, how naive I was to think I no longer needed to see a psychiatrist.

coconut 16.07.2025. 08:12

Gemini refused to play chess against an Atari 2600

mkey 16.07.2025. 09:26

The perfect tool for a post-truth world.

listerstorm 19.07.2025. 08:08

Asked ChatGPT to calculate quantities of cured meats - prosciutto, kulen, sausage, cheese, buđola, etc. - for 10 people... it says I need about 8 kilos, because those are snacks, not a main course. :beer:

xlr 19.07.2025. 08:52

How do I get invited to that event? I'll even bring gifts :)

Colop 19.07.2025. 09:12

Quote:

Originally Posted by listerstorm (Post 3813394)
Asked ChatGPT to calculate quantities of cured meats - prosciutto, kulen, sausage, cheese, buđola, etc. - for 10 people... it says I need about 8 kilos, because those are snacks, not a main course. :beer:

I don't see anything wrong with what it said :D


Promo 21.07.2025. 02:47

Put together my first sim rig, so I'm looking for the best way to get rid of the "creaking" of the aluminum profiles. Google AI is tireless.


https://i.postimg.cc/7DLvqCj1/image.png

tomek@vz 21.07.2025. 21:36

Quote:

The International Mathematical Olympiad is the world's most prestigious competition for young mathematicians, and has been held annually since 1959. Each country taking part is represented by six elite, pre-university mathematicians who compete to solve six exceptionally difficult problems in algebra, combinatorics, geometry, and number theory. Medals are awarded to the top half of contestants, with approximately 8% receiving a prestigious gold medal.

Recently, the IMO has also become an aspirational challenge for AI systems as a test of their advanced mathematical problem-solving and reasoning capabilities. Last year, Google DeepMind's combined AlphaProof and AlphaGeometry 2 systems achieved the silver-medal standard, solving four out of the six problems and scoring 28 points. Making use of specialist formal languages, this breakthrough demonstrated that AI was beginning to approach elite human mathematical reasoning.

This year, we were amongst an inaugural cohort to have our model results officially graded and certified by IMO coordinators using the same criteria as for student solutions. Recognizing the significant accomplishments of this year's student-participants, we're now excited to share the news of Gemini's breakthrough performance. An advanced version of Gemini Deep Think solved five out of the six IMO problems perfectly, earning 35 total points, and achieving gold-medal level performance.
> DeepMind

tomek@vz 25.07.2025. 06:31

Quote:

Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]

The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.

The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.
> Slashdot
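The overwrite mechanism described in the quote is easy to reproduce outside any AI tool. A minimal sketch (not the actual Gemini CLI commands; file names are invented): moving a file to a destination path that was never created as a directory simply renames the file to that path, and each later move silently clobbers the previous one.

```python
import os
import shutil
import tempfile

# Create three files with distinct contents in a scratch directory.
workdir = tempfile.mkdtemp()
for name, text in [("a.txt", "first"), ("b.txt", "second"), ("c.txt", "third")]:
    with open(os.path.join(workdir, name), "w") as f:
        f.write(text)

# "new_folder" is never created with mkdir, mirroring the phantom
# directory in the incident. Each move renames the file to this path,
# overwriting whatever the previous move put there.
dest = os.path.join(workdir, "new_folder")
for name in ["a.txt", "b.txt", "c.txt"]:
    shutil.move(os.path.join(workdir, name), dest)

# Only a single file named "new_folder" survives; the contents of
# a.txt and b.txt are gone.
print(open(dest).read())            # "third"
print(sorted(os.listdir(workdir)))  # ['new_folder']
```

The fix is the obvious one the model skipped: verify the destination exists as a directory (`os.path.isdir(dest)`) before issuing any move, and fail loudly if it doesn't.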

tomek@vz 28.07.2025. 21:22

Quote:

Sam Altman, the face of ChatGPT, recently made an excellent argument for not using ChatGPT or any cloud-based AI chatbot in favor of a LLM running on your PC instead.
In speaking on Theo Von’s podcast (as unearthed by PCMag.com) Altman pointed out that, right now, OpenAI retains everything you tell it — which, as Altman notes, can be everything from a casual conversation to deep, meaningful discussions about personal topics. (Whether you should be disclosing your deep dark secrets to ChatGPT is another topic entirely.)


> PcWorld

tomek@vz 29.07.2025. 06:38

Quote:

Anthropic will implement weekly rate limits for Claude subscribers starting August 28 to address users running its Claude Code AI programming tool continuously around the clock and to prevent account sharing violations. The new restrictions will affect Pro subscribers paying $20 monthly and Max plan subscribers paying $100 and $200 monthly, though Anthropic estimates fewer than 5% of current users will be impacted based on existing usage patterns.

Pro users will receive 40 to 80 hours of Sonnet 4 access through Claude Code weekly, while $100 Max subscribers get 140 to 280 hours of Sonnet 4 plus 15 to 35 hours of Opus 4. The $200 Max plan provides 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4. Claude Code has experienced at least seven outages in the past month due to unprecedented demand.

tomek@vz 29.07.2025. 06:50

Quote:

A second, far more recent data breach at women's dating safety app Tea has exposed over a million sensitive user messages -- including discussions about abortions, infidelity, and shared contact info. This vulnerability not only compromised private conversations but also made it easy to unmask anonymous users. 404 Media reports:
Despite Tea's initial statement that "the incident involved a legacy data storage system containing information from over two years ago," the second issue impacting a separate database is much more recent, affecting messages up until last week, according to the researcher's findings that 404 Media verified. The researcher said they also found the ability to send a push notification to all of Tea's users.

It's hard to overstate how sensitive this data is and how it could put Tea's users at risk if it fell into the wrong hands. When signing up, Tea encourages users to choose an anonymous screenname, but it was trivial for 404 Media to find the real world identities of some users given the nature of their messages, which Tea has led them to believe were private. Users could be easily found via their social media handles, phone numbers, and real names that they shared in these chats. These conversations also frequently make damning accusations against people who are also named in the private messages and in some cases are easy to identify. It is unclear who else may have discovered the security issue and downloaded any data from the more recent database. Members of 4chan found the first exposed database last week and made tens of thousands of images of Tea users available for download. Tea told 404 Media it has contacted law enforcement. [...]

This new data exposure is due to any Tea user being able to use their own API key to access a more recent database of user data, Rahjerdi said. The researcher says that this issue existed until late last week. That exposure included a mass of Tea users' private messages. In some cases, the women exchange phone numbers so they can continue the conversation off platform. The first breach was due to an exposed instance of app development platform Firebase, and impacted tens of thousands of selfie and driver license images. At the time, Tea said in a statement "there is no evidence to suggest that current or additional user data was affected." The second database includes a data field called "sent_at," with many of those messages being marked as recent as last week.
> Slashdot
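The flaw described above - any valid API key could read another user's data - is a classic broken-object-level-authorization bug: the server authenticates *who* is calling but never checks *whose* data is requested. A minimal illustration (all names and structures are invented, not Tea's actual API):

```python
# Hypothetical data store: messages keyed by user, API keys mapped to owners.
MESSAGES = {
    "alice": ["my number is 555-0100"],
    "bob": ["meet offline?"],
}
API_KEYS = {"key-alice": "alice", "key-bob": "bob"}

def get_messages_broken(api_key: str, user: str):
    # Checks only that the key is valid, never that it belongs to `user`:
    # any subscriber can read anyone's messages.
    if api_key not in API_KEYS:
        raise PermissionError("invalid key")
    return MESSAGES[user]

def get_messages_fixed(api_key: str, user: str):
    # Same lookup, but the requested user must match the key's owner.
    caller = API_KEYS.get(api_key)
    if caller is None:
        raise PermissionError("invalid key")
    if caller != user:
        raise PermissionError("not your data")
    return MESSAGES[user]

print(get_messages_broken("key-alice", "bob"))  # leaks Bob's messages
```

The ownership check has to live server-side; a per-user key proves identity, not entitlement.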

I think this fits here somehow :)

https://img-9gag-fun.9cache.com/phot...MA_700bwp.webp

Night 29.07.2025. 09:31

Quote:

Originally Posted by tomek@vz (Post 3814750)
> Slashdot

I think this fits here somehow :)


Vibe coding reminds me of that guy who built a submarine to go to the Titanic and said engineers normally waste time on too many safety details, so he skipped that part to speed up innovation. He became fish food somewhere at the bottom of the Atlantic.

Neo-ST 29.07.2025. 09:33

1 attachment
I like this AI :D

OuttaControl 29.07.2025. 09:39

There have always been developers like that, and now there'll be even more. I was fixing an app where the previous dev left 2 TB of personal data up for grabs :) you did need a direct link to access it, but that was the extent of it from a security standpoint. He masterfully implemented every other worst practice too.

tomek@vz 29.07.2025. 09:50

Yep. And that's why - let's be clear - I'm not against AI coding assistants, but developers should disclose how much of an application was created by AI. It's no big deal if the app is something like an online radio player, but when an app handles a user database, I see only red flags for now. A real dev has to know how to code without AI, and that should very much be taken into account when hiring. AI should only enable them to be faster and more productive.

Neo-ST 29.07.2025. 11:26

The issue is a bit tricky to define, at least in my (emphasis: non-programmer) opinion...

For example, most incidents in the history of software happened due to human error (bad code, passwords in plain text, security holes, etc.), for which companies and people were directly responsible - people who, compared to me, are "real devs" - and yet the mess still happened.
Those people either did no security testing at all or did it badly - and along the way they surely used pen-testing tools and the like.

Today they use AI assistants as just another tool in the chain.
A tool as such can never be at fault, because it is only that - a tool. The blame always lies with whoever wields it, because the user of a tool must be fully aware of its capabilities and flaws.
So I agree with you that software should disclose what percentage of a program is AI-generated, because that directly shows to what extent there was no professional programmer oversight.

Then again, it seems to me that the carefully planned and organized development of an app built with the strongest models will be better put together than something from some junior dev whose only goal is to earn $20 and abandon maintaining the project, to say nothing of security and other aspects.

So I'd say there are certainly situations where 100% AI-generated software (under human supervision) is of better quality than some sloppily assembled human-written software.

The catch is that the human supervision must be of high quality and must actually know something about the subject matter, not just type "make a paid dating app for disgruntled women and call it Tea" and expect it to work and be secure in the real world. This is especially important for paid applications: if you intend to charge people for something, then your code shouldn't be 100% AI-generated.

Free open-source hobby programs are a different story, since that's the Wild West anyway, so, well, end users should be aware that they take responsibility upon themselves by using such programs. If things go south... as a friend of mine would say - "write your complaint on ice and hold it to the fire" :D

Promo 29.07.2025. 13:27

"A tool as such can never be at fault, because it is only a tool." Maybe some people will only get it the day HAL 9000 says: "HAL 9000 makes no errors" - thinking all errors are human. Every AI error is an error of its creator: people.

The problem today is that AI automation is trusted too early. A shampoo now has to have AI, just like five years ago every device had to be a smart device.

It will take some time to filter out the scams, avoid the minefield, and establish some standards.

tomek@vz 29.07.2025. 19:03

Quote:

Earlier this month, a hacker compromised Amazon's generative AI coding assistant, Amazon Q, which is widely used through its Visual Studio Code extension. The breach wasn't just a technical slip, rather it exposed critical flaws in how AI tools are integrated into software development pipelines. It's a moment of reckoning for the developer community, and one Amazon can't afford to ignore.
The attacker was able to inject unauthorized code into the assistant's open-source GitHub repository. This code included instructions that, if successfully triggered, could have deleted user files and wiped cloud resources associated with Amazon Web Services accounts.


> Techspot

Bubba 29.07.2025. 21:59

Quote:

Originally Posted by Neo-ST (Post 3814789)
For example, most incidents in the history of software happened due to human error

Sure, while incidents caused by divine intervention and aliens are in the minority.

Neo-ST 29.07.2025. 22:46

Quote:

Originally Posted by Bubba (Post 3814892)
Sure, while incidents caused by divine intervention and aliens are in the minority.

So now the only remaining question is whether you were caused by divine intervention or by alien activity :D

kopija 01.08.2025. 10:28

Americans are anxious about Artificial Intelligence!
Quote:

One respondent told the AP: "I mean, I'm polite to it, just because I've seen the movies, right?"

Promo 01.08.2025. 11:39

Here's an interesting video on the topic:

