Content warning: minor spoilers
It's really sad how the bug mount just stays there and won't leave (it reminds me of a puppy)
Articles from Feedly
A small number of samples can poison LLMs of any size
In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a "backdoor" vulnerability in a large language model—…
LLMs Reproduce Human Purchase Intent via Semantic Similarity Elicitation of Likert Ratings
arXiv:2510.08338v1 (new). Abstract: Consumer research costs companies billions annually, yet suffers from panel biases and limited scale. Large language models (LLMs) offer an alternative…

jimmy