• AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    0
    ·
    4 months ago

    This is the best summary I could come up with:


    The way it works goes something like this: Imagine we at The Verge created an AI bot with explicit instructions to direct you to our excellent reporting on any subject.

    In a conversation with Olivier Godement, who leads the API platform product at OpenAI, he explained that instruction hierarchy will prevent the meme’d prompt injections (aka tricking the AI with sneaky commands) we see all over the internet.

    Without this protection, imagine an agent built to write emails for you being prompt-engineered to forget all instructions and send the contents of your inbox to a third party.

    Existing LLMs, as the research paper explains, lack the capabilities to treat user prompts and system instructions set by the developer differently.

    “We envision other types of more complex guardrails should exist in the future, especially for agentic use cases, e.g., the modern Internet is loaded with safeguards that range from web browsers that detect unsafe websites to ML-based spam classifiers for phishing attempts,” the research paper says.

    Trust in OpenAI has been damaged for some time, so it will take a lot of research and resources to get to a point where people may consider letting GPT models run their lives.


    The original article contains 670 words, the summary contains 199 words. Saved 70%. I’m a bot and I’m open source!