• stupidcasey@lemmy.world · 1 day ago

    Lol, knowing the post-processing done on your iPhone, this whole thing sounds like an actual joke. Does no one remember the fake moon incident? Your photos have been AI-generated for years and no one noticed; no algorithm on earth could tell the difference between a phone photo and an AI photo, because they are the same thing.

      • stupidcasey@lemmy.world · 22 hours ago

        You absolutely missed everything; the moon is literally fake… When you take a picture of the moon, your camera uses AI photo manipulation to replace your garbage shot with a completely AI-generated image, because taking pictures of the moon is actually pretty difficult. It makes pictures look much better, and in 99% of cases it is better, but in edge cases, like trying to photograph something flying in front of the moon, such as the ISS or a cloud, it is not. It may also cause issues if you try to introduce your photos in court, because everything you take is inherently doctored.

        • xenoclast@lemmy.world · 21 hours ago

          Huh. I thought that was just based on Samsung’s “Space Zoom” promo photos and never made it into the wild.

  • Rob200@lemmy.autism.place · 1 day ago

    Not sure how to feel about this, but if they are honest about the labels and the labeling is accurate 100% of the time, it’s a nice feature for independent fact-checkers.

  • nyan@lemmy.cafe · 1 day ago

    You may be able to prove that a photo with certain metadata was taken by a camera (my understanding is that that’s the method), but you can’t prove that a photo without it wasn’t, because older cameras won’t have the necessary support, and wiping metadata is trivial anyway. So is it better to have more false negatives than false positives? Maybe. My suspicion is that it won’t make much difference to most people.

    • WolfLink@sh.itjust.works · edited · 1 day ago

      Even if you assume the images you care about have this metadata, all it takes is a hacked camera (which could be as simple as carefully taking a photo of your AI-generated image) to fake authenticity.

      And the vast majority of images you see online are heavily compressed, so what actually gets served isn’t the 6 MB+ digitally signed raw file anyway.

        • WolfLink@sh.itjust.works · edited · 1 day ago

          It’s not that simple. It’s not just a “this is or isn’t AI” boolean in the metadata. Hash the image, then sign the hash with a digital signature key. The signature will be invalid if the image has been tampered with, and you can’t make a new signature without the signing key.

          Once the image is signed, you can’t tamper with it and get away with it.
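
          A minimal sketch of that flow in Python, using the `cryptography` package (the filename and key handling here are illustrative; a real camera would keep the private key in tamper-resistant hardware):

          ```python
          import hashlib

          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          # In a real camera this key would live in a secure element, never in software.
          private_key = Ed25519PrivateKey.generate()
          public_key = private_key.public_key()

          image_bytes = open("photo.jpg", "rb").read()   # hypothetical file
          digest = hashlib.sha256(image_bytes).digest()  # hash the image
          signature = private_key.sign(digest)           # sign the hash

          # Verification fails if even a single byte of the image has changed.
          try:
              public_key.verify(signature, hashlib.sha256(image_bytes).digest())
              print("signature valid")
          except InvalidSignature:
              print("image was tampered with")
          ```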

          The vulnerability is, how do you ensure an image isn’t faked before it gets to the signature part? On some level, I think this is a fundamentally unsolvable problem. But there may be ways to make it practically impossible to fake, at least for the average user without highly advanced resources.

          • cmnybo@discuss.tchncs.de · 22 hours ago

            Cameras don’t cryptographically sign the images they take. Even if that was added, there are billions of cameras in use that don’t support signing the images. Also, any sort of editing, resizing, or reencoding would make that signature invalid. Almost no one is going to post pictures to the web without any sort of editing. Embedding 10+ MB images in a web page is not practical.
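
            To make the re-encoding point concrete: a visually identical quality-85 re-save produces different bytes, so any signature over the original file stops verifying. A quick sketch with Pillow (the filename is a placeholder):

            ```python
            import hashlib
            from io import BytesIO

            from PIL import Image

            original = open("photo.jpg", "rb").read()  # placeholder path

            # Re-encode the same picture at JPEG quality 85 -- visually near-identical.
            buf = BytesIO()
            Image.open(BytesIO(original)).save(buf, "JPEG", quality=85)

            print(hashlib.sha256(original).hexdigest())
            print(hashlib.sha256(buf.getvalue()).hexdigest())  # different digest, so an old signature no longer verifies
            ```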

    • T156@lemmy.world · 1 day ago

      A fair few sites will also wipe metadata for safety reasons, since photo metadata can include things like the location where the photo was taken.
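
      As a sketch of how a site might do that with Pillow, dropping just the GPS block (tag 0x8825 is the standard EXIF GPS IFD; the filenames are placeholders):

      ```python
      from PIL import Image

      img = Image.open("upload.jpg")
      exif = img.getexif()
      exif.pop(0x8825, None)  # remove the GPS IFD; date and camera tags survive
      img.save("scrubbed.jpg", exif=exif)
      ```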

  • restingboredface@sh.itjust.works · 1 day ago

    It’s of course troubling that AI images will go unidentified through this service (I am also not at all confident that Google can do this well or consistently).

    However, I’m also worried about the opposite side of this problem: real images being mislabeled as AI. I can see a lot of bad actors using that to discredit legitimate news sources or stories that don’t fit their narrative.

  • Dagamant@lemmy.world · 1 day ago

    I watched a video on methods for detecting AI generation in images. One of the methods was comparing the noise across color channels: cameras produce different noise in different channels, while AI doesn’t. There are also signals like JPEG compression artifacts showing up in images saved in other formats.

    So there are technical solutions to it, but I wouldn’t know how to automate them.
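
    A rough sketch of the per-channel noise check with Pillow and NumPy (a toy heuristic, not a validated detector; the blur radius and threshold are arbitrary assumptions):

    ```python
    import numpy as np
    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg").convert("RGB")  # placeholder path

    # High-pass residual: the image minus a blurred copy is mostly sensor noise.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
    residual = np.asarray(img, np.float32) - np.asarray(blurred, np.float32)

    # Real sensors tend to show unequal noise levels across R, G, and B.
    stds = residual.reshape(-1, 3).std(axis=0)
    print(dict(zip("RGB", stds.round(3))))
    if stds.max() - stds.min() < 0.05:  # arbitrary threshold
        print("channel noise looks unnaturally uniform -- worth a closer look")
    ```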

  • apfelwoiSchoppen@lemmy.world · edited · 1 day ago

    Google is planning to roll out a technology that will identify whether a photo was taken with a camera, edited by software like Photoshop, or produced by generative AI models.

    So they are going to use AI to detect AI. That should not present any problems.

        • FatCrab@lemmy.one · 1 day ago

          Yes, it’s called a GAN and has been a fundamental technique in ML for years.

            • FatCrab@lemmy.one · 1 day ago

              My point is just that they’re effectively describing a discriminator. Like, yeah, it entails a lot more tough problems than that sentence makes it seem, but it’s a known and very active area of ML. Sure, there may be other metadata and contextual features to discriminate on, but eventually those heuristics will inevitably be closed off, and we’ll just end up with a giant distributed, quasi-federated GAN. Which, setting aside externalities that I’m skeptical anyone in a position to address actually understands, is kind of neat in a vacuum.
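
              For reference, the discriminator half of a GAN is just a binary classifier over images; a toy PyTorch version (layer sizes are arbitrary) might look like:

              ```python
              import torch
              import torch.nn as nn

              # Toy discriminator: 64x64 RGB image in, "real vs. generated" logit out.
              class Discriminator(nn.Module):
                  def __init__(self):
                      super().__init__()
                      self.net = nn.Sequential(
                          nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
                          nn.LeakyReLU(0.2),
                          nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
                          nn.LeakyReLU(0.2),
                          nn.Flatten(),
                          nn.Linear(64 * 16 * 16, 1),
                      )

                  def forward(self, x):
                      return self.net(x)

              d = Discriminator()
              batch = torch.randn(8, 3, 64, 64)     # stand-in for a batch of images
              print(torch.sigmoid(d(batch)).shape)  # 8 "probability of real" scores
              ```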

  • tal@lemmy.today · 1 day ago

    looks dubious

    The problem here is that if this is unreliable – and I’m skeptical that Google can produce a system that will work across the board – then you have a synthesized image that now has Google attesting that it’s non-synthetic.

    • xenoclast@lemmy.world · edited · 21 hours ago

      Fun fact about AI products (or any gold-rush economy): it doesn’t have to work. It just has to sell.

      I mean, this is generally true about anything, but it’s particularly bad in these situations. P.T. Barnum had a few thoughts on this as well.

    • AbouBenAdhem@lemmy.world · 1 day ago

      The problem here is that if this is unreliable…

      And the problem if it is reliable is that everyone becomes dependent on Google to literally define reality.

    • SchmidtGenetics@lemmy.world · 1 day ago

      I guess this would be a good reason to include some EXIF data when images are hosted on websites; from my limited understanding, that’s one of the only ways to tell an image is genuine.

        • SchmidtGenetics@lemmy.world · edited · 1 day ago

          include some EXIF data

          That’s what I said.

          Date, device, edited: that can all be included; location doesn’t need to be.
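
          Reading exactly those fields takes a few lines with Pillow (0x0132 is the DateTime tag, 0x010F/0x0110 are Make/Model, 0x0131 is Software; the filename is a placeholder):

          ```python
          from PIL import Image

          exif = Image.open("photo.jpg").getexif()
          print("taken:",  exif.get(0x0132))  # DateTime
          print("make:",   exif.get(0x010F))  # camera manufacturer
          print("model:",  exif.get(0x0110))  # camera model
          print("editor:", exif.get(0x0131))  # Software tag, often set by editing tools
          ```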

            • SchmidtGenetics@lemmy.world · 1 day ago

              To prove the legitimacy of the image? It’s a great data point that’s pretty anonymous; they don’t need to include the MAC, SIM, serial number, or other information.

              • conciselyverbose@sh.itjust.works · 1 day ago

                A. It’s not even the weakest of weak evidence of whether a photo is legitimate. It tells you literally zero.

                B. Even if it was concrete proof, that would still be a truly disgusting reason to think you were entitled to that information.

                • SchmidtGenetics@lemmy.world · edited · 1 day ago

                  You can use metadata to prove an image is real, and you can’t prove something is real without it, so it’s the only current option. It tells you a lot; you just don’t want people to know it, apparently, but that doesn’t change the fact that it can be used to legitimize an image.

                  What’s disgusting about knowing whether an image was taken on a Sony DSLR, an Android, or an iPhone? And entitled…? This is so you can prove your image is real. What the hell are you talking about here?

          • CatsGoMOW@lemmy.world · 24 hours ago

            It seems like you’re assuming that file-modified times are fixed…? Every piece of metadata like that can be altered. If you took a picture and posted it somewhere, I could take it, alter it to my liking, then add fake EXIF data and even make it look like I modified the image before your actual original version.

            You can’t use any of that metadata to prove anything.
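
            Forging those fields is as easy as reading them; with Pillow it’s a few lines (the values below are obviously fabricated):

            ```python
            from PIL import Image

            img = Image.open("yours.jpg")  # someone else's downloaded photo
            exif = img.getexif()
            exif[0x0132] = "2019:06:01 09:00:00"  # back-date the DateTime tag
            exif[0x0110] = "NIKON D850"           # claim a different camera
            img.save("forged.jpg", exif=exif)
            ```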

            • SchmidtGenetics@lemmy.world · edited · 23 hours ago

              No, but it seems like you’re assuming they would look at this sandboxed, by itself…? Of course there is more than one data point to look at. When you uploaded the image would be noted, so even if you upload an image with older EXIF data, so what? The original poster would still have the original image, and when that image was first hosted would have been scraped and documented. So if you host the image with fake data later, it compares the two, sees that your fake was posted six months later, and flags it, like it should. And the original owner can claim authenticity.

              Metadata provides a trail and can be used with other data points to show authenticity when a bad actor comes for your image.

              You’re apparently assuming we’d look at a single image’s EXIF data to determine… what? Obviously they would gather every image that looks similar or matches exactly and use the EXIF data to find the real one, along with the other methods mentioned.

              The only attack vector is newly created images that haven’t been digitally signed; anything digitally signed can be verified as new, unless you go to extreme lengths to fake an image and then somehow recapture it with a digitally signed camera without it being detected as fake by other methods….
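
              One way a host could implement that “gather the look-alikes, trust the earliest record” idea is perceptual hashing, e.g. with the `imagehash` package (the index and timestamps here are hypothetical):

              ```python
              import imagehash
              from PIL import Image

              # Hypothetical index of previously seen uploads: perceptual hash -> first-seen time.
              seen = {imagehash.phash(Image.open("original.jpg")): "2024-03-01T12:00:00Z"}

              new_hash = imagehash.phash(Image.open("reupload.jpg"))
              for old_hash, first_seen in seen.items():
                  if new_hash - old_hash <= 8:  # small Hamming distance = near-duplicate
                      print(f"near-duplicate of an image first seen {first_seen}")
              ```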