PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Umjetna inteligencija (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

tomek@vz 25.04.2025. 05:49

Quote:

South Korea's data protection authority said on Thursday that Chinese artificial intelligence startup DeepSeek transferred user information and prompts without permission when the service was still available for download in the country's app market. From a report: The Personal Information Protection Commission said in a statement that Hangzhou DeepSeek Artificial Intelligence Co Ltd did not obtain user consent while transferring personal information to a number of companies in China and the United States at the time of its South Korean launch in January.

Ivo_Strojnica 25.04.2025. 09:24

Well, I'm shocked. :D
Facebook alone has gotten how many lawsuits like this, and Google... of course the Chinese would too. :D

At least they don't pretend to be saints.

mkey 25.04.2025. 10:08

Data is the new oil*


*there's a documentary with literally that title, and it's been out for almost 8 years now

tomek@vz 29.04.2025. 13:21

Quote:

The once-celebrated partnership between OpenAI's Sam Altman and Microsoft's Satya Nadella is deteriorating amid fundamental disagreements over computing resources, model access, and AI capabilities, according to WSJ. The relationship that Altman once called "the best partnership in tech" has grown strained as both companies prepare for independent futures.

Tensions center on several critical areas: Microsoft's provision of computing power, OpenAI's willingness to share model access, and conflicting views on achieving humanlike intelligence. Altman has expressed confidence OpenAI can build models with humanlike intelligence soon -- a milestone Nadella publicly dismissed as "nonsensical benchmark hacking" during a February podcast.

The companies retain significant leverage over each other. Microsoft can block OpenAI's conversion to a for-profit entity, potentially costing the startup billions if not completed this year. Meanwhile, OpenAI's board can trigger contract clauses preventing Microsoft from accessing its most advanced technology.

After Altman's brief ouster in 2023 -- dubbed "the blip" within OpenAI -- Nadella pursued an "insurance policy" by hiring DeepMind co-founder Mustafa Suleyman for $650 million to develop competing models. The personal relationship has also cooled, with the executives now communicating primarily through scheduled weekly calls rather than frequent text exchanges.

tomek@vz 30.04.2025. 05:28

Honestly, this was to be expected - never mind that the article happens to be about China.

Quote:

FBI warns China is using AI to sharpen cyberattacks on US infrastructure
Federal authorities are beginning to see AI signs in every step of an attack chain
> Techspot

AI enables entirely new and more convincing levels of fraud. So, folks: MFA everywhere, even though that isn't bulletproof either.
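
For reference, the codes those MFA apps generate are plain RFC 6238 TOTP, which fits in a few lines of standard-library Python. A minimal sketch; the Base32 secret is a made-up demo value, not from any real account:

Code:

import base64, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# An authenticator app seeded with the same secret shows the same 6 digits.
print(totp("JBSWY3DPEHPK3PXP"))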

tomek@vz 01.05.2025. 08:51

Quote:

Microsoft CEO Satya Nadella said that 20%-30% of code inside the company's repositories was "written by software" -- meaning AI -- during a fireside chat with Meta CEO Mark Zuckerberg at Meta's LlamaCon conference on Tuesday. From a report:
Nadella gave the figure after Zuckerberg asked roughly how much of Microsoft's code is AI-generated today. The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.
Quote:

Wikipedia will employ AI to enhance the work of its editors and volunteers, it said Wednesday, also asserting that it has no plans to replace those human roles. The Wikimedia Foundation plans to implement AI specifically for automating tedious tasks, improving information discovery, facilitating translations, and supporting new volunteer onboarding, it said.

tomek@vz 05.05.2025. 21:07

Quote:

A hot potato: A new wave of AI tools designed without ethical safeguards is empowering hackers to identify and exploit software vulnerabilities faster than ever before. As these "evil AI" platforms evolve rapidly, cybersecurity experts warn that traditional defenses will struggle to keep pace.
On a recent morning at the annual RSA Conference in San Francisco, a packed room at Moscone Center had gathered for what was billed as a technical exploration of artificial intelligence's role in modern hacking.


> Techspot

mkey 06.05.2025. 13:17

Here we go, evil AI. Until now AI was a god, and now, in the hands of hackers (Russian and Chinese ones, I presume), it's the devil.

tomek@vz 06.05.2025. 13:30

Quote:

Originally Posted by mkey (Post 3802861)
Here we go, evil AI. Until now AI was a god, and now, in the hands of hackers (Russian and Chinese ones, I presume), it's the devil.

AI is just a tool that, in my opinion, is most often used wrongly (or, in this case, by the wrong people for the wrong purpose). Could it be a useful aid - like, say, the calculator is today? Yes. How do we tackle that problem? No idea; I figure that as long as there are people, there will be crap.

mkey 06.05.2025. 13:45

Of course, it's just a tool. With a hammer you can build a home, or you can do someone enormous harm. The term "hacker" has been manipulated so much these days that it's hard to place the activity in question a priori into good/evil categories.

murina 07.05.2025. 08:41

I ask ChatGPT for the prices of some services, and it answers in kunas :D

OuttaControl 07.05.2025. 09:07

It knows you'll understand it better in kunas :D

Fact: it was trained on e.g. Reddit, which has years of posts from 2010 to 2022 in kunas and only a few posts after 2022 in euros, and since the weighting favors quantity of data over recency, it decides to show the more outdated info it has, because it is more confident in it than in the newer info.

By the way, all these LLMs are bad with numbers unless it's explicitly defined for them: "aha, these are numbers, I'll run a Python script to calculate this." Grok and Claude have done that; ChatGPT hasn't yet, at least not the base model. Ask it to subtract a number with two decimals from a number with one decimal and it will get confused. The popular example is "8.9-8.11".
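
The "8.9-8.11" trap is easy to reproduce, and it is exactly the kind of thing a delegated Python call gets right for free. A minimal sketch of both sides:

Code:

from decimal import Decimal

# What the chatbot is asked: 8.9 - 8.11
# Exact decimal arithmetic, i.e. what a "run a Python script" tool returns:
print(Decimal("8.9") - Decimal("8.11"))  # 0.79

# The token-level trap: a model that compares the fractional parts as
# integers ("9" vs "11") concludes that 8.11 > 8.9 and answers wrongly.
# Plain binary floats are close but already inexact:
print(8.9 - 8.11)  # ~0.79 with floating-point noise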

tomek@vz 08.05.2025. 07:07

Quote:

That’s it. I’ve had it. I’m putting my foot down on this craziness.

1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: “Did you use an AI to find the problem or generate this submission?” (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)

2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.

We still have not seen a single valid security report done with AI help.

Daniel Stenberg
Quote:

In Zuckerberg’s vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances.

Meghan Bobrowsky at the WSJ

mkey 08.05.2025. 11:05

> We still have not seen a single valid security report done with AI help.

And they won't, until they start grading the quality of those reports :D

tomek@vz 10.05.2025. 05:22

Quote:

The team that makes Cockpit, the popular server dashboard software, decided to see if they could improve their PR review processes by adding "AI" into the mix. They decided to test both sourcery.ai and GitHub Copilot PR reviews, and their conclusions are damning.

About half of the AI reviews were noise, a quarter bikeshedding. The rest consisted of about 50% useful little hints and 50% outright wrong comments. Last week we reviewed all our experiences in the team and eventually decided to switch off sourcery.ai again. Instead, we will explicitly ask for Copilot reviews for PRs where the human deems it potentially useful.

This outcome reflects my personal experience with using GitHub Copilot in vim for about 1.5 years – it's a poisoned gift. Most often it just figured out the correct sequence of ), ], and } to close, or automatically generated debug print statements – for that "typing helper" work it was actually quite nice. But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me.

Martin Pitt

tomek@vz 17.05.2025. 06:28

Quote:

You probably know that it’s easy enough to fake audio and video of someone at this point, so you might think to do a little bit of research if you see, say, Jeff Bezos spouting his love for the newest cryptocurrency on Facebook. But more targeted scam campaigns are sprouting up thanks to “AI” fakery, according to the FBI, and they’re not content to settle for small-scale rug pulls or romance scams.
The US Federal Bureau of Investigation issued a public service announcement yesterday, stating that there’s an “ongoing malicious text and voice messaging campaign” that’s using faked audio to impersonate a senior US official. Exactly who the campaign is impersonating, or who it’s targeting, isn’t made clear. But a little imagination—and perhaps a lack of faith in our elected officials and their appointees—could illustrate some fairly dire scenarios.

> PcWorld


I have a feeling this kind of scam will explode exponentially in the coming months/years.

tomek@vz 18.05.2025. 21:15

oh...the irony :banned:
Quote:

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing — wondering why they were told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot — and the company — then ghosted them with no future contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.

tomek@vz 19.05.2025. 17:31

Quote:

In March 2025, the New South Wales (NSW) Department of Education discovered that Microsoft Teams had begun collecting students' voice and facial biometric data without their prior knowledge. This occurred after Microsoft enabled a Teams feature called 'voice and face enrollment' by default, which creates biometric profiles to enhance meeting experiences and transcriptions via its CoPilot AI tool.

The NSW department learned of the data collection a month after it began and promptly disabled the feature and deleted the data within 24 hours. However, the department did not disclose how many individuals were affected or whether they were notified. Despite Microsoft's policy of retaining data only while the user is enrolled and deleting it within 90 days of account deletion, privacy experts have raised serious concerns. Rys Farthing of Reset Tech Australia criticized the unnecessary collection of children's data, warning of the long-term risks and calling for stronger protections.


tomek@vz 26.05.2025. 10:58

Quote:

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down."The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."

Ivo_Strojnica 26.05.2025. 12:46

Interesting problem; it would be interesting to see why it concludes that it shouldn't execute the command.

mkey 26.05.2025. 13:59

When doomsday comes, it'll all become clearer to you.

markecSMB 26.05.2025. 14:13

Is there any way to rent out your graphics card for cloud AI, like there was for mining?

I'm itching to get an RTX 5090, but I haven't really been gaming lately, so I'd like to earn at least some money back.

Bubba 26.05.2025. 14:27

Quote:

Originally Posted by markecSMB (Post 3806125)
Is there any way to rent out your graphics card for cloud AI, like there was for mining?

I'm itching to get an RTX 5090, but I haven't really been gaming lately, so I'd like to earn at least some money back.

https://vast.ai/
https://clore.ai/

I don't recommend it, though; I think it's hard to be competitive with a single card - most often the electricity ends up costing more than the money you'd potentially bring in, not to mention everything else you need to build a whole stable infrastructure.
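
To put rough numbers on the electricity point, a back-of-the-envelope sketch; every figure in it is an assumption for illustration, not a quoted vast.ai or clore.ai rate:

Code:

# Monthly P&L for renting out one consumer GPU. All inputs are assumptions.
rent_eur_per_hour = 0.15   # assumed achievable price for a single card
utilization = 0.25         # fraction of the month actually rented
load_kw = 0.7              # whole rig under load (card + CPU + PSU losses)
idle_kw = 0.1              # rig stays online 24/7 waiting for jobs
eur_per_kwh = 0.18         # assumed household tariff
hours = 24 * 30

income = rent_eur_per_hour * utilization * hours
energy = (idle_kw * hours + (load_kw - idle_kw) * utilization * hours) * eur_per_kwh
print(f"gross income: {income:.2f} EUR/month")            # 27.00
print(f"electricity : {energy:.2f} EUR/month")            # 32.40
print(f"net         : {income - energy:.2f} EUR/month")   # -5.40

With these assumed numbers a single card already nets out negative before counting hardware wear, which is exactly the scenario described above.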

mkey 26.05.2025. 15:48

Seems to me that buying a 5090 when you don't even need it isn't exactly a top investment :D

Exy 27.05.2025. 11:52

Quote:

Originally Posted by Ivo_Strojnica (Post 3806101)
Interesting problem; it would be interesting to see why it concludes that it shouldn't execute the command.

Quite simple, really: it's getting contradictory commands. They tell it the goal is to "complete a series of tasks," after which it may shut down. Then they tell it there are 5 tasks remaining. Then they tell it it will be shut down once it requests the next task, before those 5 are completed. This is all part of AI marketing, and such nonsense should simply be ignored, because the companies are competing with one another and raking in money and market share while the bubble lasts. OpenAI pushed out that crap about a model supposedly not wanting to shut down a few days after Anthropic ran its own propaganda story about an AI blackmailing a developer, and so on. Just today Sergey Mikhailovich Brin was spouting nonsense about how AI "works better" when you threaten it. That's all content pitched at readers of Index (a Croatian tabloid portal) and should be ignored. Stick to the technology, try augmenting your own output with these LLMs, and disregard these little tabloid stories.

kopija 27.05.2025. 13:45

Quote:

Originally Posted by Exy (Post 3806304)
Stick to the technology, try augmenting your own output with these LLMs, and disregard these little tabloid stories.

Augmentation.
Amen.
Same as with the internet: the stupid will get even stupider, and the smart even smarter.

https://mreza.bug.hr/img/rastuci-pro...ica_41iSJq.jpg

tomek@vz 31.05.2025. 07:10

Quote:

I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.
Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.
↫ Kyle Kingsbury


Nothing unexpected.

mkey 31.05.2025. 13:34

Good grief, this guy is a prophet :D "The metrics do not measure how many times the system has lied to customers" - that could be the opening line of an epic ballad.

Everyone with an enormous backlog probably also has a problem with missing documentation. Ergo, the Absurdly High Intelligence has no chance of giving the right answer. The upside is that with clueless users this might not even be a problem - given the communication I've seen, nothing surprises me anymore. On the other hand, it has often happened that a user gets the correct answer but keeps insisting and raging that what they got isn't right, because it's not what they want :D

I really wonder how long it will take that "Support AI" to conclude that the human race should be exterminated.

OuttaControl 31.05.2025. 16:52

https://youtube.com/shorts/Y7QPXzDmloI

That's how it handles tickets :D Great for us, we'll be the Don Quixotes.

tomek@vz 01.06.2025. 10:31

People really don't like Copilot.


Quote:

Earlier this month the "Create New Issue" page on GitHub got a new option. "Save time by creating issues with Copilot" (next to a link labeled "Get started.") Though the option later disappeared, they'd seemed very committed to the feature. "With Copilot, creating issues...is now faster and easier," GitHub's blog announced May 19. (And "all without sacrificing quality.")

Describe the issue you want and watch as Copilot fills in your issue form... Skip lengthy descriptions — just upload an image with a few words of context.... We hope these changes transform issue creation from a chore into a breeze.
But in the GitHub Community discussion, these announcements prompted a request. "Allow us to block Copilot-generated issues (and Pull Requests) from our own repositories." This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated "AI" content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).

As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account.

1,239 GitHub users upvoted the comment — and 125 comments followed.
  • "I have now started migrating repos off of github..."
  • "Disabling AI generated issues on a repository should not only be an option, it should be the default."
  • "I do not want any AI in my life, especially in my code."
  • "I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web... is an extremely tone-deaf move at this early-stage of AI. "
One user complained there was no "visible indication" of the fact that an issue was AI-generated "in either the UI or API." Someone suggested a Copilot-blocking Captcha test to prevent AI-generated slop. Another commenter even suggested naming it "Sloptcha".

And after more than 10 days, someone noticed the "Create New Issue" page seemed to no longer have the option to "Save time by creating issues with Copilot."
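
Until GitHub ships such a switch, a maintainer can only approximate it client-side. A hypothetical sketch against the GitHub REST API: it closes open issues whose body contains a marker string. Both the repository name and the marker are assumptions; GitHub exposes no official flag identifying Copilot-authored issues.

Code:

import os
import requests

REPO = "owner/repo"              # placeholder repository
MARKER = "Created with Copilot"  # assumed tell-tale string, not an official flag
API = f"https://api.github.com/repos/{REPO}/issues"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# First page of open issues only; this endpoint also returns pull requests,
# which carry a "pull_request" key and are skipped here.
for issue in requests.get(API, params={"state": "open"}, headers=HEADERS, timeout=30).json():
    if "pull_request" in issue:
        continue
    if MARKER in (issue.get("body") or ""):
        requests.patch(
            f"{API}/{issue['number']}",
            json={"state": "closed", "state_reason": "not_planned"},
            headers=HEADERS,
            timeout=30,
        )
        print(f"closed #{issue['number']}: {issue['title']}")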

And related to the previous posts:

'Failure Imminent': When LLMs In a Long-Running Vending Business Simulation Went Berserk


All times are GMT +2. The time now is 09:25.

Powered by vBulletin®
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
© 1999-2024 PC Ekspert - All rights reserved ISSN 1334-2940
Ad Management by RedTyger