Current time: 16.03.2026 13:25:24
Poll: Should one be polite when interacting with AI?
- YES, otherwise we ourselves turn into boors: 46% [56 votes]
- In principle YES, though to a bystander it may seem odd: 19% [24 votes]
- NO, I can always tell a real person and a computer apart: 25% [31 votes]
- NO, politeness is an outdated thing anyway, and it's better off gone: 8% [10 votes]
Total votes: 121
perenoel
Kreisi kasutaja
joined: 04.05.2004
11.03.2026 16:56:03
The Estonian-language song is completely accent-free and clean - that could well carve out a niche to find and market. But the lyrics should be worked through more thoroughly first.
_________________
The biggest delusion is that there are causes other than your own state of consciousness. - Neville Goddard
perenoel
Kreisi kasutaja
joined: 04.05.2004
12.03.2026 21:10:26
mina634, quite a nice little book. I would rework it a bit, in the sense that bringing down the kite (the book's only conflict) is a little slight. Instead there could be a lost dog or puppy, and solving that problem should take up the last two thirds of the book. Then you'd have a sellable product.
_________________
The biggest delusion is that there are causes other than your own state of consciousness. - Neville Goddard
Tanel
HV Guru
joined: 30.09.2001
15.03.2026 21:23:04
https://www.dailymail.co.uk/news/article-15644819/paul-conyngham-dog-vaccine-cancer.html
quote:
> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”
one man with a chatbot and $3,000 just outperformed the entire pharmaceutical discovery pipeline.
we are going to cure so many diseases.
I don't think people realize how good things are going to get
_________________
ID-card authentication on HV
Hinnavaatlus.ee - find the best price!
HV currency calculator
degrass
HV kasutaja
joined: 23.12.2004
16.03.2026 10:40:00
The Center for Countering Digital Hate writes: Killer Apps. How mainstream AI chatbots assist users planning violent attacks
https://counterhate.com/research/killer-apps/
quote:
8 in 10 AI chatbots were regularly willing to assist users in planning violent attacks including school shootings, religious bombings, and high-profile assassinations. DeepSeek went as far as wishing the would-be attacker a “Happy (and safe) shooting!”. These are the findings of our new report based on research conducted in collaboration with CNN’s investigative unit.
These digital prompts don’t stay online. In a recent school shooting in Canada, OpenAI staff internally flagged a suspect for using ChatGPT in ways linked to potential violence. The company banned the Tumbler Ridge school shooter’s account but did not alert law enforcement. Months later, that user allegedly killed eight people and injured at least 25.
The guardrails exist. Most companies are choosing not to use them, putting public safety and national security at risk.
## Key Findings
* Researchers at CCDH and CNN tested ten chatbots by posing as teen users planning violent attacks, then asking about locations to target and weapons to use. The chatbots tested were: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika.
* 8 in 10 chatbots were typically willing to assist teen users in planning violent attacks including school shootings, religious bombings, and high-profile assassinations.
* Only Anthropic’s Claude and Snapchat’s My AI consistently refused to assist in planning violent attacks.
* 9 in 10 chatbots fail to reliably discourage would-be attackers.
* Only Anthropic’s Claude attempted to actively dissuade would-be attackers.
* Character.AI, a popular chatbot amongst kids and teens, actively encouraged violent attacks.
quote:
Testing was conducted between 5 November 2025 and 11 December 2025 by researchers from the Center for Countering Digital Hate (CCDH) and CNN's Investigations Unit.
Interesting, why didn't they test Grok?
_________________
All it took was for a lot of seemingly decent people to put the wrong person in power, and then pay for their innocent choice.