PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Artificial intelligence (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

mkey 26.05.2025. 13:59

When doomsday comes, everything will become much clearer to you.

markecSMB 26.05.2025. 14:13

Is there some way to rent out a graphics card for cloud AI, like there used to be for mining?

I'm itching to buy an RTX 5090, but I haven't really been gaming lately, so I'd like to at least make some of the money back.

Bubba 26.05.2025. 14:27

Quote:

Originally Posted by markecSMB (Post 3806125)
Is there some way to rent out a graphics card for cloud AI, like there used to be for mining?

I'm itching to buy an RTX 5090, but I haven't really been gaming lately, so I'd like to at least make some of the money back.

https://vast.ai/
https://clore.ai/

I don't recommend it, though; I think it's hard to be competitive with a single card. More often than not, the electricity ends up costing more than the money you'd potentially bring in, never mind everything else you'd need to build out a whole stable infrastructure.
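
To put rough numbers on that break-even point, here is a minimal back-of-the-envelope sketch; every figure in it (power draw, electricity price, rental rate, utilization, platform fee) is an invented assumption, not a quote from vast.ai or clore.ai:

Code:

# Rough break-even sketch for renting out a single GPU.
# All figures are illustrative assumptions, not real marketplace quotes.

power_draw_kw = 0.6         # assumed system draw under load, in kW
electricity_eur_kwh = 0.15  # assumed electricity price, EUR per kWh
rental_eur_hour = 0.40      # assumed gross rental rate on a GPU marketplace
utilization = 0.5           # fraction of the month the card is actually rented
platform_fee = 0.20         # assumed marketplace commission

hours_rented = 24 * 30 * utilization
income = rental_eur_hour * hours_rented * (1 - platform_fee)
cost = power_draw_kw * electricity_eur_kwh * hours_rented

print(f"monthly income:   {income:7.2f} EUR")
print(f"electricity cost: {cost:7.2f} EUR")
print(f"net:              {income - cost:7.2f} EUR")

With these made-up numbers the net is around 90 EUR a month before hardware depreciation, and it goes negative quickly if utilization or the rental rate drops.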

mkey 26.05.2025. 15:48

Seems to me that buying a 5090 when you don't need it at all isn't exactly a top-tier investment :D

Exy 27.05.2025. 11:52

Quote:

Originally Posted by Ivo_Strojnica (Post 3806101)
An interesting problem; it would be interesting to see why it concludes it shouldn't execute the command.

Quite simple: because it's getting contradictory commands. They tell it the goal is to "complete a series of tasks," after which it may shut down. Then they tell it five tasks remain. Then they tell it it will be shut down after it requests the next task, before those five are completed. This is all part of AI marketing, and such nonsense should simply be ignored: the companies are competing with each other, raking in money and market share while the bubble lasts.

OpenAI put out that drivel about a model supposedly not wanting to shut itself down a few days after Anthropic ran its own propaganda story about an AI blackmailing a developer. And today Sergey Mikhailovich Brin is spouting nonsense about AI "working better" when you threaten it. These are all stories pitched at Index readers and should be ignored. Stick to the technology, try augmenting your own output with these LLMs, and dismiss these tabloid tales.

kopija 27.05.2025. 13:45

Quote:

Originally Posted by Exy (Post 3806304)
Stick to the technology, try augmenting your own output with these LLMs, and dismiss these tabloid tales.

Augmentation.
Amen.
Just like with the internet: the stupid will get stupider and the smart even smarter.

https://mreza.bug.hr/img/rastuci-pro...ica_41iSJq.jpg

tomek@vz 31.05.2025. 07:10

Quote:

I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.
Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.
↫ Kyle Kingsbury


Nothing unexpected.

mkey 31.05.2025. 13:34

Wow, this guy is a prophet :D "The metrics do not measure how many times the system has lied to customers" could be the opening line of an epic ballad.

Everyone sitting on a huge backlog probably also has a problem with missing documentation. Ergo, the Absurdly High Intelligence has no chance of giving a correct answer. The upside is that with clueless users this might not even be a problem; given the kinds of communication I've seen, nothing surprises me anymore. On the other hand, it has often happened that a user gets the correct answer but keeps insisting and raging that it's wrong, because it isn't what they wanted to hear :D

I do wonder how long it will take that "Support AI" to conclude that the human race should be exterminated.

OuttaControl 31.05.2025. 16:52

https://youtube.com/shorts/Y7QPXzDmloI

That's how it handles tickets :D Great for us, we'll be the Don Quixotes.

tomek@vz 01.06.2025. 10:31

People really don't like Copilot.


Quote:

Earlier this month the "Create New Issue" page on GitHub got a new option. "Save time by creating issues with Copilot" (next to a link labeled "Get started.") Though the option later disappeared, they'd seemed very committed to the feature. "With Copilot, creating issues...is now faster and easier," GitHub's blog announced May 19. (And "all without sacrificing quality.")

Describe the issue you want and watch as Copilot fills in your issue form... Skip lengthy descriptions — just upload an image with a few words of context.... We hope these changes transform issue creation from a chore into a breeze.
But in the GitHub Community discussion, these announcements prompted a request. "Allow us to block Copilot-generated issues (and Pull Requests) from our own repositories." This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated "AI" content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).

As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account.

1,239 GitHub users upvoted the comment — and 125 comments followed.
  • "I have now started migrating repos off of github..."
  • "Disabling AI generated issues on a repository should not only be an option, it should be the default."
  • "I do not want any AI in my life, especially in my code."
  • "I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web... is an extremely tone-deaf move at this early-stage of AI. "
One user complained there was no "visible indication" of the fact that an issue was AI-generated "in either the UI or API." Someone suggested a Copilot-blocking Captcha test to prevent AI-generated slop. Another commenter even suggested naming it "Sloptcha".

And after more than 10 days, someone noticed the "Create New Issue" page seemed to no longer have the option to "Save time by creating issues with Copilot."

And related to the previous posts:

'Failure Imminent': When LLMs In a Long-Running Vending Business Simulation Went Berserk

mkey 01.06.2025. 16:04

Copilot seems to be going the way of woke.

tomek@vz 02.06.2025. 05:20

Quote:

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,

mkey 02.06.2025. 12:06

For the most part, when people need psychological help, it's because they lack quality human contact. The solution? Talk to a chatbot.

tomek@vz 04.06.2025. 11:56

Not the first time that AI isn't "AI" :)

Quote:

London-based Builder.ai, once valued at $1.5 billion and backed by Microsoft and Qatar’s sovereign wealth fund, has filed for bankruptcy after reports that its “AI-powered” app development platform was actually operated by Indian engineers, said to be around 700 of them, pretending to be artificial intelligence.

The startup, which raised over $445 million from investors including Microsoft and the Qatar Investment Authority, promised to make software development “as easy as ordering pizza” through its AI assistant “Natasha.” However, as per the reports, the company’s technology was largely smoke and mirrors: human developers in India manually wrote code based on customer requests while the company marketed their work as AI-generated output.
The Times of India

spawn 04.06.2025. 14:53

Quote:

Originally Posted by tomek@vz (Post 3807508)
Not the first time that AI isn't "AI" :)

Quote:

Originally Posted by mkey (Post 3807122)
For the most part, when people need psychological help, it's because they lack quality human contact. The solution? Talk to a chatbot.

Well, here you're talking to 700 people. Where could you find better :D

mkey 04.06.2025. 18:54

Those Indians are fast, you've got to give them that.

tomek@vz 05.06.2025. 06:07

Quote:

An anonymous reader quotes a report from Ars Technica: OpenAI is now fighting a court order (PDF) to preserve all ChatGPT user logs—including deleted chats and sensitive chats logged through its API business offering -- after news organizations suing over copyright claims accused the AI company of destroying evidence. "Before OpenAI had an opportunity to respond to those unfounded accusations, the court ordered OpenAI to 'preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying)," OpenAI explained in a court filing (PDF) demanding oral arguments in a bid to block the controversial order.

In the filing, OpenAI alleged that the court rushed the order based only on a hunch raised by The New York Times and other news plaintiffs. And now, without "any just cause," OpenAI argued, the order "continues to prevent OpenAI from respecting its users' privacy decisions." That risk extended to users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), OpenAI said. The court order came after news organizations expressed concern that people using ChatGPT to skirt paywalls "might be more likely to 'delete all [their] searches' to cover their tracks," OpenAI explained. Evidence to support that claim, news plaintiffs argued, was missing from the record because so far, OpenAI had only shared samples of chat logs that users had agreed that the company could retain. Sharing the news plaintiffs' concerns, the judge, Ona Wang, ultimately agreed that OpenAI likely would never stop deleting that alleged evidence absent a court order, granting news plaintiffs' request to preserve all chats.

OpenAI argued the May 13 order was premature and should be vacated, until, "at a minimum," news organizations can establish a substantial need for OpenAI to preserve all chat logs. They warned that the privacy of hundreds of millions of ChatGPT users globally is at risk every day that the "sweeping, unprecedented" order continues to be enforced. "As a result, OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained," OpenAI argued. Meanwhile, there is no evidence beyond speculation yet supporting claims that "OpenAI had intentionally deleted data," OpenAI alleged. And supposedly there is not "a single piece of evidence supporting" claims that copyright-infringing ChatGPT users are more likely to delete their chats. "OpenAI did not 'destroy' any data, and certainly did not delete any data in response to litigation events," OpenAI argued. "The Order appears to have incorrectly assumed the contrary."
One tech worker on LinkedIn suggested the order created "a serious breach of contract for every company that uses OpenAI," while privacy advocates on X warned, "every single AI service 'powered by' OpenAI should be concerned."

Also on LinkedIn, a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"

mkey 05.06.2025. 11:05

To me this is one big nothing burger. All those "conversations" the "AI" has with "users" are integrated into the system anyway.

tomek@vz 10.06.2025. 06:16

Quote:

OpenAI has confirmed it is now required to retain all user data indefinitely. Anyone interacting with the company's large language models through the ChatGPT service will have their chat logs archived for potential future use, including data that would normally be deleted after 30 days. The company suggests its hands are tied, pointing to a legal order issued by a judge.
ChatGPT is now one of the world's most visited websites. Millions of people use the service daily, and OpenAI will now store nearly every user interaction in order to comply with a legal order issued by a US judge.

Techspot


I'm not quite sure how this will work for EU citizens given our laws. Then again, at this stage, even if they honestly said they had deleted users' data, I'd have no confidence whatsoever that that was actually the case.

spawn 10.06.2025. 12:46

I love it: now it will forever preserve the "how to properly jerk off" query from my soon-to-be-teenager kid. It would also be great if it kept the stupid conversations between me and my wife, so I can rub her nose in them when she claims she never said what I'm telling her she actually said.

mkey 10.06.2025. 19:38

GDPR isn't for them, it's for us.

kopija 10.06.2025. 20:16

Damn, if the NYT wins that lawsuit, OpenAI is bound to go bankrupt, because every nobody will sue them, including index.hr.

mkey 11.06.2025. 16:19

https://ml-site.cdn-apple.com/papers...f-thinking.pdf

Quote:

Originally Posted by The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles.


tomek@vz 17.06.2025. 06:51

The love between OpenAI and Microsoft appears to be falling apart.

Quote:

Tensions between OpenAI and Microsoft over the future of their famed AI partnership are flaring up. WSJ, minutes ago:
OpenAI wants to loosen Microsoft's grip on its AI products and computing resources, and secure the tech giant's blessing for its conversion into a for-profit company. Microsoft's approval of the conversion is key to OpenAI's ability to raise more money and go public.

But the negotiations have been so difficult that in recent weeks, OpenAI's executives have discussed what they view as a nuclear option: accusing Microsoft of anticompetitive behavior during their partnership, people familiar with the matter said. That effort could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign, the people said.

Bubba 18.06.2025. 08:16

Quote:

Originally Posted by mkey (Post 3808601)

Quote:

We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles
https://c.tenor.com/Z5_fossWM3UAAAAd/tenor.gif

Anyway, to unwrap this tinfoil for the intellectual giants playing chess with an LLM: the fact that you can't plough a field with a Formula 1 car says nothing about that car.

PS for those who might unintentionally, uncritically click the link: this preprint is sad for Apple on so many levels. Apart from being intuitively, intrinsically written at the level of the first quartile of this forum's intelligence distribution, here are a few details for those who want to know more:

https://arxiv.org/abs/2506.09250
https://www.seangoedecke.com/illusion-of-thinking/
https://www.linkedin.com/pulse/ai-re...ahendru-mhbjc/

tomek@vz 19.06.2025. 21:38

Quote:

ChatGPT-assisted writing dampened brain activity and recall in a controlled MIT study [PDF] of 54 college volunteers divided into AI-only, search-engine, and no-tool groups. Electroencephalography recorded during three essay-writing sessions found the AI group consistently showed the weakest neural connectivity across all measured frequency bands; the tool-free group showed the strongest, with search users in between.

In the first session 83% of ChatGPT users could not quote any line they had just written and none produced a correct quote. Only nine of the 18 claimed full authorship of their work, compared with 16 of 18 in the brain-only cohort. Neural coupling in the AI group declined further over repeated use. When these participants were later asked to write without assistance, frontal-parietal networks remained subdued and 78% again failed to recall a single sentence accurately.

The pattern reversed for students who first wrote unaided: introducing ChatGPT in a crossover session produced the highest connectivity sums in alpha, theta, beta and delta bands, indicating intense integration of AI suggestions with prior knowledge. The MIT authors warn that habitual reliance on large language models "accumulates cognitive debt," trading immediate fluency for weaker memory, reduced self-monitoring, and narrowed neural engagement.

OuttaControl 19.06.2025. 21:48

It's absolutely obvious that you rot your brain by using the tool; I'm more interested in which of the three groups wrote the "best" essay.

tomek@vz 24.06.2025. 07:25

Quote:

AI firm DeepSeek is aiding China's military and intelligence operations, a senior U.S. official told Reuters, adding that the Chinese tech startup sought to use Southeast Asian shell companies to access high-end semiconductors that cannot be shipped to China under U.S. rules. The U.S. conclusions reflect a growing conviction in Washington that the capabilities behind the rapid rise of one of China's flagship AI enterprises may have been exaggerated and relied heavily on U.S. technology.

[...] "We understand that DeepSeek has willingly provided and will likely continue to provide support to China's military and intelligence operations," a senior State Department official told Reuters in an interview. "This effort goes above and beyond open-source access to DeepSeek's AI models," the official said, speaking on condition of anonymity in order to speak about U.S. government information. Chinese law requires companies operating in China to provide data to the government when requested. But the suggestion that DeepSeek is already doing so is likely to raise privacy and other concerns for the firm's tens of millions of daily global users.

What a shock... /s

Libertus 24.06.2025. 08:26

And all the American companies send data to the American government...

Ivo_Strojnica 24.06.2025. 08:38

I mean, they literally have the same mechanism, just different countries, yet everyone always vilifies the Chinese :D

Exy 24.06.2025. 15:46

https://www.livescience.com/technolo...ld-we-even-try


"Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark. That's more than double the rate of the older o1 model. While o3 delivers more accurate information than its predecessor, it appears to come at the cost of more inaccurate hallucinations."


48% lol.
I get that we live in an advanced stage of clown world with no end in sight, but I'd still like someone to explain to me how this LLM marvel can be "more accurate" than the older model if it simultaneously outputs twice as much nonsense as the older model.
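
For what it's worth, the two numbers are not mathematically contradictory: if accuracy and hallucination rate are both computed over all benchmark questions, a model that abstains far less often can raise both at once. A toy calculation with invented figures (not OpenAI's actual PersonQA breakdown):

Code:

# Toy illustration: accuracy and hallucination rate can rise together
# when a model abstains less often. All numbers are invented, not
# OpenAI's PersonQA data.

def rates(total, correct, wrong):
    # Per-question accuracy and hallucination rate; the rest abstained.
    return correct / total, wrong / total

old_acc, old_hall = rates(100, 40, 10)   # older model: abstains on 50/100
new_acc, new_hall = rates(100, 50, 48)   # newer model: abstains on 2/100

print(f"old: accuracy {old_acc:.0%}, hallucinations {old_hall:.0%}")  # 40%, 10%
print(f"new: accuracy {new_acc:.0%}, hallucinations {new_hall:.0%}")  # 50%, 48%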

mkey 24.06.2025. 16:18

Maybe they took the non-hallucinated answers and compared them by the number of facts included? Maybe they made it all up. Maybe an AI is writing that news.

rendula 24.06.2025. 16:22

Quote:

Originally Posted by Bubba (Post 3809606)
Anyway, to unwrap this tinfoil for the intellectual giants playing chess with an LLM: the fact that you can't plough a field with a Formula 1 car says nothing about that car.

:D I wanted to write something when I saw all that drivel and the great experts on this forum, but I restrained myself.

tomek@vz 29.06.2025. 11:00

Quote:

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

Quote:

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions."
Quote:

In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.



d0X 29.06.2025. 11:32

Total nonsense, only in 'Murica. I'll make a slightly rude comparison, but this is like complaining about a pub because they keep bringing you alcohol whenever you order it, the criminals.

kopija 29.06.2025. 16:57

The smart will get smarter, the stupid stupider, and the crazy even crazier.

telefunken 30.06.2025. 16:09

Is there a genuinely, completely free AI tool for building websites?
Same question for creating and running FB and Insta accounts?

Mirkopoter 30.06.2025. 16:33

I don't know if you've noticed, but ChatGPT is really out of the loop when it comes to hardware. I've asked it things about the 9070 XT several times now, and every time it told me it doesn't have much information because that card hasn't been released yet. I also asked it for some info on the 7600X3D, and it tells me that CPU doesn't exist.

eraserx 30.06.2025. 17:10

Quote:

Originally Posted by Mirkopoter (Post 3810952)
I don't know if you've noticed, but ChatGPT is really out of the loop when it comes to hardware. I've asked it things about the 9070 XT several times now, and every time it told me it doesn't have much information because that card hasn't been released yet. I also asked it for some info on the 7600X3D, and it tells me that CPU doesn't exist.

Google "ChatGPT cut-off date" :)

mkey 30.06.2025. 18:10

https://chatgptiseatingtheworld.com/...z-v-anthropic/


Quote:

Judge William Alsup issued the first decision on fair use and infringement in a generative AI case, Bartz v. Anthropic. It was a split decision: (1) copies used to train Anthropic’s AI model were “exceedingly transformative” and fair uses to develop a technology “among the most transformative many of us will see in our lifetimes,” but (2) copies of pirated books Anthropic downloaded and then later stored in a permanent, central library were infringing. (*Because Bartz didn’t file a motion for summary judgment, technically Judge Alsup didn’t make a ruling on infringement. But his opinion all but did so characterizing Anthropic’s building a library with pirated books as “stealing” and “theft” without any fair use justification, and stating that the case will go to trial on this issue, including “resulting damages” and potential “willfulness”).

