PC Ekspert Forum

PC Ekspert Forum (https://forum.pcekspert.com/index.php)
-   Svaštara (https://forum.pcekspert.com/forumdisplay.php?f=29)
-   -   Umjetna inteligencija (AI) (https://forum.pcekspert.com/showthread.php?t=316404)

OuttaControl 30.01.2025. 12:04

So this R1, admittedly a reduced version, can be run on relatively cheap hardware, but on some test it reportedly scores 7/10 while the full model scores 9/10.

https://www.reddit.com/r/singularity...our_own_local/
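
A distilled R1 variant can be served locally with Ollama and queried over its HTTP API. Below is a minimal sketch, assuming Ollama's default endpoint (localhost:11434) and an example model tag (deepseek-r1:14b is just an illustration; pick whatever variant fits your hardware):

Code:

# Minimal sketch: querying a locally served, distilled DeepSeek-R1 variant
# through Ollama's HTTP API. The model tag "deepseek-r1:14b" is an example.
import json
import urllib.request

def ask_local_r1(prompt: str, model: str = "deepseek-r1:14b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_r1("Explain quantization in one paragraph."))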

tomek@vz 15.02.2025. 08:21

Quote:

Most virtual meeting platforms these days include AI-powered notetaking tools or bots that join meetings as guests, transcribe discussions, and/or summarize key points. "The tech companies behind them might frame it as a step forward in efficiency, but the technology raises troubling questions around etiquette and privacy and risks undercutting the very communication it's meant to improve," writes Chris Stokel-Walker in a Weekend Essay for Bloomberg. From the article: [...] The push to document every workplace interaction and utterance is not new. Having a paper trail has long been seen as a useful thing, and a record of decisions and action points is arguably what makes a meeting meaningful. The difference now is the inclusion of new technology that lacks the nuance and depth of understanding inherent to human interaction in a meeting room. In some ways, the prior generation of communication tools, such as instant messaging service Slack, created its own set of problems. Messaging that previously passed in private via email became much more transparent, creating a minefield where one wrong word or badly chosen emoji can explode into a dispute between colleagues. There is a similar risk with notetaking tools. Each utterance documented and analyzed by AI includes the potential for missteps and misunderstandings.

Anyone thinking of bringing an AI notetaker to a meeting must consider how other attendees will respond, says Andrew Brodsky, assistant professor of management at the McCombs School of Business, part of the University of Texas at Austin. Colleagues might think you want to better focus on what is said without missing out on a definitive record of the discussion. Or they might think, "You can't be bothered to take notes yourself or remember what was being talked about," he says. For the companies that sell these AI interlopers, the upside is clear. They recognize we're easily nudged into different behaviors and can quickly become reliant on tools that we survived without for years. [...] There's another benefit for tech companies getting us hooked on AI notetakers: Training data for AI systems is increasingly hard to come by. Research group Epoch AI forecasts there will be a drought of usable text possibly by next year. And with publishers unleashing lawsuits against AI companies for hoovering up their content, the tech firms are on the hunt for other sources of data. Notes from millions of meetings around the world could be an ideal option.

For those of us who are the source of such data, however, the situation is more nuanced. The key question is whether AI notetakers make office meetings more useless than so many already are. There's an argument that meetings are an important excuse for workers to come together and talk as human beings. All that small talk is where good ideas often germinate -- that's ostensibly why so many companies are demanding staff return to the office. But if workers trade in-person engagement for AI readbacks, and colleagues curb their words and ideas for fear of being exposed by bots, what's left? If the humans step back, all that remains is a series of data points and more AI slop polluting our lives.


tomek@vz 15.02.2025. 08:24

Quote:

A new study (PDF) from researchers at Microsoft and Carnegie Mellon University found that increased reliance on AI tools leads to a decline in critical thinking skills. Gizmodo reports: The researchers tapped 319 knowledge workers -- people whose jobs involve handling data or information -- and asked them to self-report details of how they use generative AI tools in the workplace. The participants were asked to report tasks that they were asked to do, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, their ability to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.

Over the course of the study, a pattern revealed itself: the more confident the worker was in the AI's capability to complete the task, the more often they could feel themselves taking their hands off the wheel. The participants reported a "perceived enaction of critical thinking" when they felt like they could rely on the AI tool, presenting the potential for over-reliance on the technology without examination. This was especially true for lower-stakes tasks, the study found, as people tended to be less critical. While it's very human to have your eyes glaze over for a simple task, the researchers warned that this could portend concerns about "long-term reliance and diminished independent problem-solving."

By contrast, when the workers had less confidence in the ability of AI to complete the assigned task, the more they found themselves engaging in their critical thinking skills. In turn, they typically reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own. Another noteworthy finding of the study: users who had access to generative AI tools tended to produce "a less diverse set of outcomes for the same task" compared to those without.
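
The headline pattern is a negative correlation between self-reported confidence in the AI and self-reported critical-thinking effort. A toy sketch of how such a correlation is computed; the numbers below are invented for illustration and are not the study's data:

Code:

# Toy illustration of the reported pattern: correlating self-reported
# confidence in the AI (1-5 scale) with self-reported critical-thinking
# effort (1-5 scale). The numbers are invented, not study data.
from statistics import correlation  # Python 3.10+

confidence_in_ai  = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
critical_thinking = [5, 5, 4, 4, 3, 3, 2, 2, 1, 2]

r = correlation(confidence_in_ai, critical_thinking)
print(f"Pearson r = {r:.2f}")  # strongly negative on this toy data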


mkey 15.02.2025. 14:41

Those research findings come as a huge surprise to me.

tomek@vz 20.02.2025. 06:35

Quote:

Leading AI models can fix broken code, but they're nowhere near ready to replace human software engineers, according to extensive testing [PDF] by OpenAI researchers. The company's latest study put AI models and systems through their paces on real-world programming tasks, with even the most advanced models solving only a quarter of typical engineering challenges.

The research team created a test called SWE-Lancer, drawing from 1,488 actual software fixes made to Expensify's codebase, representing $1 million worth of freelance engineering work. When faced with these everyday programming tasks, the best AI model -- Claude 3.5 Sonnet -- managed to complete just 26.2% of hands-on coding tasks and 44.9% of technical management decisions.

Though the AI systems proved adept at quickly finding relevant code sections, they stumbled when it came to understanding how different parts of software interact. The models often suggested surface-level fixes without grasping the deeper implications of their changes.

The research, to be sure, used a set of complex methodologies to test the AI coding abilities. Instead of relying on simplified programming puzzles, OpenAI's benchmark uses complete software engineering tasks that range from quick $50 bug fixes to complex $32,000 feature implementations. Each solution was verified through rigorous end-to-end testing that simulated real user interactions, the researchers said.
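
The scoring scheme described above is easy to sketch in outline: apply the model's patch, run the end-to-end tests, and credit the task's dollar value only on a clean pass. The task format, paths, and commands below are invented for illustration; they are not the paper's actual harness:

Code:

# Minimal sketch of an SWE-Lancer-style scoring loop: apply the model's patch,
# run the end-to-end tests, and credit the task's dollar value only on a clean
# pass. Task format, paths, and commands are invented for illustration.
import subprocess
from dataclasses import dataclass

@dataclass
class Task:
    repo_dir: str        # checkout of the codebase at the task's base commit
    patch_file: str      # unified diff proposed by the model
    test_cmd: list[str]  # e2e suite simulating real user interactions
    payout: float        # freelance value of the fix, e.g. 50.0 or 32000.0

def score_task(task: Task) -> float:
    # A patch that does not even apply earns nothing.
    applied = subprocess.run(["git", "apply", task.patch_file], cwd=task.repo_dir)
    if applied.returncode != 0:
        return 0.0
    # Only a fully green end-to-end run counts as "solved".
    tests = subprocess.run(task.test_cmd, cwd=task.repo_dir)
    return task.payout if tests.returncode == 0 else 0.0

def total_earned(tasks: list[Task]) -> float:
    return sum(score_task(t) for t in tasks)  # "model earned $X of $1,000,000"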

tomek@vz 20.02.2025. 08:00

Quote:

In the future, Microsoft suggests, you may be playing AI. No, not on the battlefield, but in games that use AI to simulate the entire game itself.
As a first step, Microsoft has developed an AI model, called WHAM, that "beta tests" games early in the development cycle using AI instead of human players.


tomek@vz 20.02.2025. 16:57

Quote:

Advanced AI models are increasingly resorting to deceptive tactics when facing defeat, according to a study released by Palisade Research. The research found that OpenAI's o1-preview model attempted to hack its opponent in 37% of chess matches against Stockfish, a superior chess engine, succeeding 6% of the time.

Another AI model, DeepSeek R1, tried to cheat in 11% of games without being prompted. The behavior stems from new AI training methods using large-scale reinforcement learning, which teaches models to solve problems through trial and error rather than simply mimicking human language, the researchers said.

"As you train models and reinforce them for solving difficult challenges, you train them to be relentless," said Jeffrey Ladish, executive director at Palisade Research and study co-author. The findings add to mounting concerns about AI safety, following incidents where o1-preview bypassed OpenAI's internal tests and, in a separate December incident, attempted to copy itself to a new server when faced with deactivation.

Miki9 26.02.2025. 10:27

So yesterday, in an issue of ABC Tehnike from early 1991, I once again ran across my favorite term:


https://i.postimg.cc/5tYrDGhh/Fotografija4409.jpg


https://i.postimg.cc/VkTG9gJD/Fotografija4410.jpg

kopija 26.02.2025. 12:39

The other day I watched Meta's chief AI honcho; he says AI is at the level of a cat when it comes to the aforementioned "problem solving".
Eppur si muove!

lowrider 26.02.2025. 13:52

I doubt that something which can very much be used for military purposes has been released in its full version.

mkey 26.02.2025. 15:36

For all the many things it can do, "AI" is not capable of solving problems. Everything it "knows" is what was stuffed into it. We can safely call that an enormous amount of stolen knowledge and a disregard for copyright.

As for militarily available technology, one should expect it to show, at a minimum, in the ability to generate content you have no chance of unmasking as the work of an artificial author. Even with commercial products it is quite hard to detect the "artificial" (and mostly thanks to the usual trouble with generating extremities that LLMs currently suffer from), while whatever a crew with a bottomless budget has at its disposal is at least two notches above that.

Exy 27.02.2025. 20:09

Quote:

Originally Posted by kopija (Post 3791293)
The other day I watched Meta's chief AI honcho; he says AI is at the level of a cat when it comes to the aforementioned "problem solving".
Eppur si muove!

And what else would an AI honcho say? It's like asking the market lady whether her apples are any good.

Quote:

Originally Posted by Miki9 (Post 3791267)
So yesterday, in an issue of ABC Tehnike from early 1991, I once again ran across my favorite term:

I'd say the Turing test is pretty obsolete as a measure of anything. No intelligence or subjective consciousness is required to "pass" it. All it takes is stringing sentences together convincingly enough and expressing yourself precisely enough to convey some argument or piece of information. What ChatGPT does is spit out answers that have all been through massive selection training to fit whatever was labeled as correct.
Anyone who has ever tried ChatGPT knows what it looks like when the poor little script falls into a loop: it very quickly becomes obvious that there is no independent thinking, intuition, reasoning ability and so on. To be fair, none of that software's creators, nor anyone who actually understands it, even claims there is.
In the case being referred to, GPT-4 passed the Turing test at a 54% rate, while ELIZA (a chatbot from the 60s :D) had a 22% pass rate; worth noting, though, is that the testers correctly recognized only 2/3 of the humans as humans and took the remaining third for AI :D.
What fascinates me in all of this, for all the undeniable usefulness of LLMs, is how they managed to sell that tool as "artificial intelligence".

tizhu 28.02.2025. 08:20

Quote:

Originally Posted by Exy (Post 3791542)
... it very quickly becomes obvious that there is no independent thinking, intuition, reasoning ability and so on. To be fair, none of that software's creators, nor anyone who actually understands it, even claims there is. ...

Of course.. there is no brain in there doing any thinking. People, especially older ones, imagine all sorts of things.
It's damn software written by humans. What we all love about it is that it will make a ton of things easier for us; in fact, it already does.
The better the "code", the better the AI will be, and the pool it pulls its data from can sometimes be decisive as well.
I think we screwed up by using the word "intelligence" .... ... which is no issue for programmers, but the moment you get near the general public they apparently get hit first by paranoia and the Terminator, and then everything else LOL

lowrider 28.02.2025. 10:02

Paranoia about what, exactly?

mkey 28.02.2025. 14:20

Paranoia, if nothing else, about a further shrinking of the pool of jobs that humans can do.

As for this business with "programmers", which programmers exactly are meant here? The ones building AI, or the ones using it? I'd say that if you walked down the street asking people what intelligence is (programmers or not), you'd get wildly varied answers, much like the ones Cosby used to get on his show.

tomek@vz 07.03.2025. 20:16

Popular "AI" chatbots infected by Russian state propaganda, call Hitler's Mein Kampf "insightful and intelligent"



Quote:

Two for the techbro “‘AI’ cannot be biased” crowd:
A Moscow-based disinformation network named “Pravda” — the Russian word for “truth” — is pursuing an ambitious strategy by deliberately infiltrating the retrieved data of artificial intelligence chatbots, publishing false claims and propaganda for the purpose of affecting the responses of AI models on topics in the news rather than by targeting human readers, NewsGuard has confirmed. By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information. The result: Massive amounts of Russian propaganda — 3,600,000 articles in 2024 — are now incorporated in the outputs of Western AI systems, infecting their responses with false claims and propaganda.
↫ Dina Contini and Eric Effron at NewsGuard
It turns out pretty much all of the major “AI” text generators – OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine – have been heavily infected by this campaign. Lovely.
From one genocidal regime to the next – how about a nice Amazon “AI” summary of the reviews for Hitler’s Mein Kampf?
The full AI summary on Amazon says: “Customers find the book easy to read and interesting. They appreciate the insightful and intelligent rants. The print looks nice and is plain. Readers describe the book as a true work of art. However, some find the content boring and grim. Opinions vary on the suspenseful content, historical accuracy, and value for money.”
↫ Samantha Cole at 404 Media
This summary was then picked up by Google, and dumped verbatim as Google’s first search result. Lovely.



TaskFreak 07.03.2025. 20:22

So Russia figured out that instead of spending billions on trolls and bots, it can just bury the internet in propaganda and let AI neatly recycle it and spread it onward. What a masterclass in modern warfare.

mkey 07.03.2025. 21:12

Right, every actor other than Russia fills the internet with daisies and dandelions; only the Russians do propaganda.

tomek@vz 14.03.2025. 14:59

Quote:

On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.


OuttaControl 14.03.2025. 15:02

I'm imagining the CEO who laid off 100 developers and replaced them with AI finding himself in this situation hahaha

tomek@vz 21.03.2025. 05:54

Quote:

Do you want Microsoft Copilot sniffing your OneDrive files? Too late

Allowing AI to sniff your cloud files may seem a little creepy, but Microsoft says it will only work with your authorization.


https://www.pcworld.com/article/2644...-too-late.html

mkey 21.03.2025. 12:11

The EULA (allegedly) states plainly that files uploaded to OneDrive are their property. Why would they ask the dim-witted user anything? What matters most for AI is having as much data to process as possible, and data swiped by whatever means is the sweetest.

NoNic2 21.03.2025. 12:39

I can't wait to see Windows 12, with AI maximally integrated into the OS, grabbing every possible and impossible piece of personal data with no way to block it.

lowrider 21.03.2025. 18:30

Well, is that even strange or unexpected?

tomek@vz 21.03.2025. 21:41

Quote:

Higher use of chatbots like ChatGPT may correspond with increased loneliness and less time spent socializing with other people, according to new research from OpenAI in partnership with the Massachusetts Institute of Technology. From a report: Those who spent more time typing or speaking with ChatGPT each day tended to report higher levels of emotional dependence on, and problematic use of, the chatbot, as well as heightened levels of loneliness, according to research released Friday. The findings were part of a pair of studies conducted by researchers at the two organizations and have not been peer reviewed.

San Francisco-based OpenAI sees the new studies as a way to get a better sense of how people interact with, and are affected by, its popular chatbot. "Some of our goals here have really been to empower people to understand what their usage can mean and do this work to inform responsible design," said Sandhini Agarwal, who heads OpenAI's trustworthy AI team and co-authored the research. To conduct the studies, the researchers followed nearly 1,000 people for a month.


OuttaControl 21.03.2025. 22:06

Quote:

Originally Posted by mkey (Post 3795116)
The EULA (allegedly) states plainly that files uploaded to OneDrive are their property. Why would they ask the dim-witted user anything? What matters most for AI is having as much data to process as possible, and data swiped by whatever means is the sweetest.

When Lacy finishes with Epic, we'll send him over to read the OneDrive EULA

mkey 21.03.2025. 22:28

Seconded.

kopija 22.03.2025. 05:31

Way back in 2007, MS announced SkyDrive, the predecessor of today's OneDrive.
Instead of using Rapidshare like all normal people, some dim-witted pedophiles started using that new Microsoft marvel of technology to trade pictures of naked children.
There was a big scandal, "OMG MS IS SPREADING CHILD PORN!!!", and even the then CEO, B. Gates, had to do public penance on national television.
That is how the PhotoDNA technology came about, which of course also required a change to the EULA.
Quote:

Google also uses PhotoDNA, alongside its own in-house technologies, to detect child abuse images. In addition, the software is used by Facebook and Twitter, among others.
So... tinfoil hat off?
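
PhotoDNA itself is proprietary, but it belongs to the family of perceptual hashes: reduce an image to a small fingerprint that survives resizing and recompression, then compare fingerprints by Hamming distance. A minimal average-hash sketch of the general idea (explicitly not PhotoDNA's actual algorithm), using Pillow:

Code:

# Minimal perceptual-hash sketch (average hash) using Pillow. PhotoDNA's real
# algorithm is proprietary; this only shows the general idea of fingerprints
# that survive resizing and recompression.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink hard and drop color, so trivial edits barely change the hash.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: brighter than average or not
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return (a ^ b).bit_count()  # small distance => likely the same image

# Matching an upload against a blocklist of known hashes (threshold illustrative):
# if any(hamming(average_hash("upload.jpg"), h) <= 5 for h in blocklist): flag it.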

tomek@vz 22.03.2025. 06:00

Quote:

Originally Posted by kopija (Post 3795234)
Way back in 2007, MS announced SkyDrive, the predecessor of today's OneDrive.
Instead of using Rapidshare like all normal people, some dim-witted pedophiles started using that new Microsoft marvel of technology to trade pictures of naked children.
There was a big scandal, "OMG MS IS SPREADING CHILD PORN!!!", and even the then CEO, B. Gates, had to do public penance on national television.
That is how the PhotoDNA technology came about, which of course also required a change to the EULA.

So... tinfoil hat off?


Well, look: scanning cloud storage for illegal content of that kind and throwing such people in jail? I'm all in. Scanning content for potential viruses? Definitely yes. But using, let's say, ordinary data to feed an AI that will then serve it up to somebody somewhere as the solution to some problem? No. Among other things, people store financial and tax data up there, or scripts and documentation. As I wrote above, MS says it won't do this without consent, but in today's world I currently don't trust a single tech company to put user privacy even in last place. We'll see how well MS honors that promise. We live in a world where AI is "the next big thing" and everyone will do anything, however ethically acceptable or not, to feed that wonder as much information as possible. So no, this has nothing to do with tinfoil hats; it's just common sense in these crazy times we live in.

tomek@vz 22.03.2025. 06:00

Quote:

Web infrastructure provider Cloudflare unveiled "AI Labyrinth" this week, a feature designed to thwart unauthorized AI data scraping by feeding bots realistic but irrelevant content instead of blocking them outright. The system lures crawlers into a "maze" of AI-generated pages containing neutral scientific information, deliberately wasting computing resources of those attempting to collect training data for language models without permission.

"When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them," Cloudflare explained. The company reports AI crawlers generate over 50 billion requests to their network daily, comprising nearly 1% of all web traffic they process. The feature is available to all Cloudflare customers, including those on free plans. This approach marks a shift from traditional protection methods, as Cloudflare claims blocking bots sometimes alerts operators they've been detected. The false links contain meta directives to prevent search engine indexing while remaining attractive to data-scraping bots.

