ChatGPT Images 2.0

(openai.com)

805 points | by wahnfrieden 15 hours ago ago

475 comments

  • lionkor an hour ago ago

    Every cent you spend on this, remember: The people who made this possible are not even getting a millionth of a cent for every billion USD made with it (they are getting nothing). Same with code; that code you spent years poring over, fixing, etc. is now how these companies make so much money and get so much investment. It's like open source, except you get shafted.

    • sp_c 17 minutes ago ago

      I don't understand why everyone is up in arms about Images / Art being generated by AI, but when it comes to code... well, who cares? The people who made all the code training data are also getting nothing!

      Potentially the one difference is that developers invented this and screwed themselves, whereas artists had nothing to do with AI.

      • happymellon 3 minutes ago ago

        > Potentially the one difference is that developers invented this and screwed themselves

        Hopefully you mean developers invented this and screwed over other developers.

        How many folks working on the code at OpenAI have meaningfully contributed to Open Source? I agree that because it is the same "job title" people might feel less sympathy, but it's not the same people.

      • sandworm101 4 minutes ago ago

        Because artists generally own their material (with exceptions at the very high end), whereas professional coders have generally abandoned ownership by ceding it as "work product" to their employers. Copy my drawings and you steal from me, a person. Copy a bit of code or a texture pack from a game and you steal from whatever private equity owns that game studio. Private equity doesn't have feelings to hurt.

        • billynomates 2 minutes ago ago

          Aren't the models trained on open source code though? In which case OpenAI et al should be following the licenses of the code on which they are trained.

    • hk__2 2 minutes ago ago

      This is the same for all human writing since the beginning. The people who wrote the books that inspired your favorite author don’t get a cent when you buy her new book.

    • barnabee 26 minutes ago ago

      A lot of people here aren't going to like it, but the only reasonable way out I can see is to eventually socialise ownership and control of AI.

      I don't see an alternative that isn't really bad.

      • master-lincoln 24 minutes ago ago

        Seize the means of production!

        • barnabee 20 minutes ago ago

          I'll be satisfied if we just manage to seize the means of our otherwise impending servitude under corporate techno-fascism…

      • user34283 6 minutes ago ago

        I figure capitalism may soon become obsolete. But I don’t think this speculation is going to make for interesting discussion on here.

        I find the technical discussion more interesting and could do without some of the moral grandstanding in the comments.

    • bradley13 10 minutes ago ago

      If you put stuff on the internet, people (and machines) can see it. How do you think human artists learn? By looking at other people's artwork. AI can do exactly the same thing.

      As for code: All of my code is open source. I don't care if people (or machines) learn from it. In fact, as a teacher, I sincerely hope that they do!

      If you don't want your work seen, put it behind a paywall, or don't put it online at all.

    • rolymath 10 minutes ago ago

      That's fine for me. As someone who can't draw or design for shit, I am getting effectively millions of dollars worth of artist time for $20/month.

      The solution is to socialize AI, not ban it.

  • minimaxir 10 hours ago ago

    So during my Nano Banana Pro experiments I wrote a very fun prompt that tests the ability for these image generation models to follow heuristics, but still requires domain knowledge and/or use of the search tool:

        Create a 8x8 contiguous grid of the Pokémon whose National Pokédex numbers correspond to the first 64 prime numbers. Include a black border between the subimages.
    
        You MUST obey ALL the FOLLOWING rules for these subimages:
        - Add a label anchored to the top left corner of the subimage with the Pokémon's National Pokédex number.
          - NEVER include a `#` in the label
          - This text is left-justified, white color, and Menlo font typeface
          - The label fill color is black
        - If the Pokémon's National Pokédex number is 1 digit, display the Pokémon in a 8-bit style
        - If the Pokémon's National Pokédex number is 2 digits, display the Pokémon in a charcoal drawing style
        - If the Pokémon's National Pokédex number is 3 digits, display the Pokémon in a Ukiyo-e style
    
    The NBP result is here, which got the numbers, corresponding Pokemon, and styles correct, with the main point of contention being that the style application is lazy and that the images may be plagiarized: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...

    Running that same prompt through gpt-2-image high gave an...interesting contrast: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...

    It did more inventive styles for the images that appear to be original, but:

    - The style logic is applied by row, not by the raw numbers, and is therefore wrong

    - Several of the Pokemon are flat-out wrong

    - Number font is wrong

    - Bottom isn't square for some reason

    Odd results.
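    For anyone who wants to sanity-check the heuristic, the digit-count rule maps onto the first 64 primes like this (a minimal Python sketch; the function names are mine, not part of the original test):

```python
def primes(n):
    """Return the first n primes via simple trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def style(dex_number):
    """The prompt's style rule, keyed on the digit count of the Pokédex number."""
    return {1: "8-bit", 2: "charcoal", 3: "Ukiyo-e"}[len(str(dex_number))]

dex = primes(64)  # the 64 Pokédex numbers the grid should contain
# 4 one-digit primes, 21 two-digit, 39 three-digit; the 64th prime is 311
```

    So a correct grid has exactly 4 "8-bit" cells, 21 charcoal cells, and 39 Ukiyo-e cells, which makes the style constraint easy to grade at a glance.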

    • MrManatee 2 hours ago ago

      Prompts like this feel like it's using the wrong abstraction. The "obvious" thing to do with something like this would be to generate some code that generates the image and then run that code.

      Inspired by this, I tried something much simpler. I asked it to draw 12 concentric circles. In three tries it always drew 10 instead. https://chatgpt.com/share/69e87d08-5a14-83eb-9a3b-3a8eb14692...
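      The code-generation route is indeed trivial for a task like this; a minimal stdlib-only sketch that emits the 12 circles as SVG (the function name is hypothetical):

```python
def concentric_circles_svg(n, size=400):
    """Emit an SVG document containing exactly n concentric circles."""
    cx = cy = size / 2
    step = (size / 2 - 10) / n  # leave a 10px margin around the outermost ring
    circles = "\n".join(
        f'<circle cx="{cx}" cy="{cy}" r="{step * (i + 1):.1f}" '
        f'fill="none" stroke="black"/>'
        for i in range(n)
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">\n{circles}\n</svg>')

svg = concentric_circles_svg(12)  # exactly 12 <circle> elements, by construction
```

      The count is correct by construction, which is exactly the property a pixel-space diffusion model can't guarantee.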

    • dvt 8 hours ago ago

      This is an amazing test and it's kinda' funny how terrible gpt-2-image is. I'd take "plagiarized" images (e.g. Google search & copy-paste) any day over how awful the OpenAI result is. Doesn't even seem like they have a sanity checker/post-processing "did I follow the instructions correctly?" step, because the digit-style constraint violation should be easily caught. It's also expensive as shit to just get an image that's essentially unusable.

      • the_arun 7 hours ago ago
        • fblp 5 hours ago ago

          Did it correctly follow the instructions? Don't know my pokemon well enough.

          • minimaxir 5 hours ago ago

            Essentially yes (bottom got distorted), but Gemini uses Nano Banana Pro or Nano Banana 2 so it's not a surprising result. The image I linked uses the raw API.

            • thih9 2 hours ago ago

              Note that the styles are different; there are two digit images rendered in color.

              Color charcoal drawings do exist, but it’s not what’s usually meant by “charcoal drawing”.

      • anshumankmr 7 hours ago ago

        that is interesting cause I feel gpt-image-1 did have that feature.

        (source: https://chatgpt.com/share/69e83569-b334-8320-9fbf-01404d18df...)

        • weird-eye-issue 5 hours ago ago

          You are comparing ChatGPT to a raw image model. These are two completely different things. ChatGPT takes your input, modifies the prompt, passes it to the image model, and then will maybe read the image and provide output. The raw image model through the API just takes the prompt verbatim and generates an image.

          • minimaxir 5 hours ago ago

            Nano Banana Pro and ChatGPT Images 2.0 also tweak the prompt because they can think.

            • weird-eye-issue 5 hours ago ago

              Yes exactly, "ChatGPT Images 2.0" is in ChatGPT. That is not a model.

      • hyperadvanced 7 hours ago ago

        I wouldn’t say it’s terrible. I wouldn’t say it’s a huge step forward in terms of quality compared to what I’ve seen before from AI

    • AussieWog93 an hour ago ago

      For what it's worth, NBP made some mistakes too.

      Artistic oddities aside (why are the 8-bit sprites 16-bit, why do the charcoal drawings have colour, why does the art of specifically the Gen 1 Pokemon look so off?), 271 is Lombre, not Lotad.

    • Palmik an hour ago ago

      I do not think this is a good prompt or useful benchmark, but nonetheless, it seems to work better for me: https://chatgpt.com/share/69e88a94-ded8-8395-b5dc-abceb2f44d...

    • vincentbuilds 3 hours ago ago

      Nano Banana Pro gets the logic and punts on the art; gpt-2-image gets the art and punts on the logic. Feels like instruction-following and creativity sit on opposite ends of the same slider.

      • dieortin 10 minutes ago ago

        This feels incredibly AI generated

    • pfortuny an hour ago ago

      Just try a 23-sided plane convex polygon.

    • rrr_oh_man 9 hours ago ago

      Why would you consider this a good prompt?

      • minimaxir 9 hours ago ago

        Because both Nano Banana Pro and ChatGPT Images 2.0 have touted strong reasoning capabilities, and this particular prompt has more objective, easy-to-validate criteria as opposed to the subjective nature of images.

        I have more subjective prompts to test reasoning but they're your-mileage-may-vary (however, gpt-2-image has surprisingly been doing much better on more objective criteria in my test cases)

    • razorbeamz an hour ago ago

      Neither of them drew them in an 8-bit style either. It's way too many colors.

      • dodslaser an hour ago ago

        Maybe they're so advanced they learned to write to the palette registers mid-scanline.

    • Razengan 5 hours ago ago

      Even a few months ago, ChatGPT/Sora's image generation performed better than Gemini/Nano Banana for certain weird prompts:

      Try things like: "A white capybara with black spots, on a tricycle, with 7 tentacles instead of legs, each tentacle is a different color of the rainbow" (paraphrased, not the literal exact prompt I used)

      Gemini just globbed a whole mass of tentacles without any regard to the count

    • m3kw9 4 hours ago ago

      Prob a very unscientific way to test an image model. This is likely because they have the reasoning turned down and let its instant output take over

      • minimaxir 4 hours ago ago

        There's no good scientific way to test a closed-source model with both nondeterministic and subjective output.

        This example image was generated using the API on high, not the low reasoning version. (it is slow and takes 2 minutes lol)

      • crustaceansoup 4 hours ago ago

        If the results are quantifiable/objective and repeatable, it's scientific. How is it not?

        The reasoning amount is part of the evaluation isn't it?

      • TeMPOraL 3 hours ago ago

        This is the best kind of science there is: direct, empirical test.

  • parasti 4 hours ago ago

    A great technical achievement, for sure, but this is kind of the moment where it enters the uncanny valley for me. The promo reel on the website makes it feel like humans doing incredible things (the background music intentionally evokes that emotion), but it's a slideshow of computer-generated images attempting to replicate the amazing things that humans do. It's just crazy to look at those images and have to consciously remind myself: nobody made this, this photographed place and these people do not exist, no human participated in this photo, no human traced the lines of this comic, no human designer laid out the text in this image. This is a really clever amalgamation machine of human-based inputs. Uncanny valley.

    • qnleigh an hour ago ago

      No this is what life looks like on the other side of the uncanny valley. The images don't look creepy because they look artificial or wrong. They're a reminder of a creepy new reality where our eyes can no longer tell us what's real.

  • simonw 14 hours ago ago

    I've been trying out the new model like this:

      OPENAI_API_KEY="$(llm keys get openai)" \
        uv run https://tools.simonwillison.net/python/openai_image.py \
        -m gpt-image-2 \
        "Do a where's Waldo style image but it's where is the raccoon holding a ham radio"
    
    Code here: https://github.com/simonw/tools/blob/main/python/openai_imag...

    Here's what I got from that prompt. I do not think it included a raccoon holding a ham radio (though the problem with Where's Waldo tests is that I don't have the patience to solve them for sure): https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a...

    • simonw 14 hours ago ago

      I just got a much better version using this command instead, which uses the maximum image size according to https://github.com/openai/openai-cookbook/blob/main/examples...

        OPENAI_API_KEY="$(llm keys get openai)" \
          uv run 'https://raw.githubusercontent.com/simonw/tools/refs/heads/main/python/openai_image.py' \
          -m gpt-image-2 \
          "Do a where's Waldo style image but it's where is the raccoon holding a ham radio" \
          --quality high --size 3840x2160
      
      https://gist.github.com/simonw/88eecc65698a725d8a9c1c918478a... - I found the raccoon!

      I think that image cost 40 cents.

      • makira 13 hours ago ago

        Fed into a clear Claude Code max-effort session with: "Inspect waldo2.png, and give me the pixel location of a raccoon holding a ham radio." It sliced the image into small sections and gave:

        "Found the raccoon holding a ham radio in waldo2.png (3840×2160).

          - Raccoon center: roughly (460, 1680)                                                                                            
          - Ham radio (walkie-talkie) center: roughly (505, 1650) — antenna tip around (510, 1585)                                         
          - Bounding box (raccoon + radio): approx x: 370–540, y: 1550–1780                                                                
                                                                                                                                           
          It's in the lower-left area of the image, just right of the red-and-white striped souvenir umbrella, wearing a green vest. "
        
        Which is correct!

        • cwillu 13 hours ago ago

          I had one problem: finding the raccoon. Now I have two: finding the red-and-white striped souvenir umbrella, and finding the raccoon.

          • makira 13 hours ago ago

            simonw posted 2 different images: make sure to look at the second one.

            • cwillu 13 hours ago ago

              Yeah, I noticed that just now, but too late to delete the comment :p

              • jaggederest 11 hours ago ago

                You had a meta problem, and three, in total: find the raccoon, find the umbrella, find the right link in the comments.

        • M3L0NM4N 10 hours ago ago

          We would need a larger sample size than just myself, but the raccoon was in the very first spot I looked. Found it literally immediately, as if that's where my eyes naturally gravitated to first. Hopefully that's just luck and not an indictment of the image-creating ability, as if there is some element missing from this "Where's Waldo" image, that would normally make Waldo hard to find.

          • nerdsniper 9 hours ago ago

            There seemed to be more space around the raccoon than most other subjects. Zoomed out it appears as almost a “halo” highlighting the raccoon.

      • prmoustache 3 hours ago ago

        Funny how it can look convincing from far away but once you zoom in you find out most characters have a mix of leprosy and skin cancer.

      • wewtyflakes 10 hours ago ago

        A startling number of people either have no arms, one arm, a half of an arm, or a shrunken arm; how odd!

        • rattlesnakedave 8 hours ago ago

          To be fair, the average person has fewer than two arms.

          • cozzyd 4 hours ago ago

            Most people have an ARM in their pockets, nowadays. And possibly on their wrist.

          • floodfx 7 hours ago ago

            Haha. Underrated comment!

        • ehnto 2 hours ago ago

          There is a leg that sprouts into part of a bush; perhaps that's where people's legs are disappearing to.

        • cozzyd 7 hours ago ago

          This is why they're congregating around the first aid and the lost and found

        • globular-toast 4 hours ago ago

          Finding the raccoon was instant. Finding all the weird AI artifacts is more fun. It's quite fascinating really. As usual it looks impressive at a glance but completely falls apart on closer inspection. I also didn't find any jokes, unless maybe the bridge to nowhere or finger posts pointing both ways counts?

      • davebren 14 hours ago ago

        The faces...that's nice that it turned a kid's book into an abomination

        • Filligree 9 hours ago ago

          By image generation standards this is a ridiculously good result. No surprise that people instantly find the new limits, but they are new limits.

          • globular-toast 4 hours ago ago

            But it's also straight up plagiarism and still ridiculously bad on so many levels.

          • davebren 8 hours ago ago

            It could already copy the art styles from its training data, what is the advancement here?

        • vaulstein 5 hours ago ago

          It's interesting that the raccoon is well defined because it was a part of the request. But none of the other Fauna are.

        • keithnz 7 hours ago ago

          it's interesting, zoomed out it kind of looks ok, zoomed in.... oh my.

      • jdironman 6 hours ago ago

        The real NFTs were the images we generated along the way

      • louiereederson 14 hours ago ago

        The people in this image remind me of early This Person Does Not Exist, in the best way

        • dfee 11 hours ago ago

          fair point, also "this raccoon does not exist"

      • gpt5 11 hours ago ago

        I tried it on the ChatGPT web UI and it also worked, although the ham radio looks like a handbag to me.

        https://postimg.cc/wyxgCgNY

        • luxpir 3 hours ago ago

          Nice, enjoyed the image as someone who has been to the events. But also easy raccoon placement :)

        • djmips 4 hours ago ago

          mmmm yummy OSLS?

      • mirekrusin 9 hours ago ago

        Can it generate a non-Halloween version though?

        This lower-is-better danse macabre, nightmare-inducing ratio feels like an interesting proxy for model capability.

      • ireadmevs 14 hours ago ago

        I found it on the 2nd image! On the 1st one not yet...

      • dzhiurgis 5 hours ago ago

        Cost me < 1 cent - https://elsrc.com/elsrc/waldo/wojak.jpg

        And this medium-quality, high-resolution https://elsrc.com/elsrc/waldo/10_wojaks.jpg was 13 cents

        p.s. aaaand that's a soft launch of my SaaS above; you can replace wojak.jpg with anything you want and it will paint that. It's basically appending to a prompt defined by elsrc's dashboard. Hopefully a more sane way to manage genai content. Be gentle to my server, HN!

      • Barbing 6 hours ago ago

        >I think that image cost 40 cents.

        Kinda made me sad assuming the author didn't license anything to OpenAI.

        I recognize it could revert (99% of?) progress if all the labs moved to consent-based training sets exclusively, but I can't think of any other fair way.

        $.40 does not represent the appropriate value to me considering the desirability of the IP and its earning potential in print and elsewhere. If the world has to wait until it’s fair, what of value will be lost? (I suppose this is where the big wrinkle of foreign open weight models comes in.)

        • rafram 6 hours ago ago

          License what? The concept of a hidden object search? The only stylistic similarity here is the viewing angle. Where’s Waldo comics are flat, brightly colored line drawings that look nothing like this at all.

          • Barbing 4 hours ago ago

            Well, I recognized the style from even the new physical books on sale today, but I don’t know art well enough to use a term like flat.

            I am not an art expert but I’m perhaps a reasonable consumer and there is possibility of confusion if someone sells AI Where’s Waldo knockoff books at the dollar store, maybe until I take a closer look.

    • makira 14 hours ago ago

      > though the problem with Where's Waldo tests is that I don't have the patience to solve them for sure

      I see an opportunity for a new AI test!

      • vunderba 13 hours ago ago

        There have already been several attempts to procedurally generate Where’s Waldo? style images since the early Stable Diffusion days, including experiments that used a YOLO filter on each face and then processed them with ADetailer.

        It's a difficult test for genai to pass. As I mentioned in a different thread, it requires a holistic understanding (in that there can only be one Waldo Highlander style), while also holding up to scrutiny when you examine any individual, ordinary figure.

      • simonw 14 hours ago ago

        I've actually been feeding them into Claude Opus 4.7 with its new high resolution image inputs, with mixed results - in one case there was no raccoon but it was SURE there was and told me it was definitely there but it couldn't find it.

    • halamadrid 5 hours ago ago

      Really hard to look at these images given how un-humanlike the humans are. A few are ok, but a lot are disfigured or missing parts, and it's hard to find a raccoon in here.

    • marricks 10 hours ago ago

      Like... this has things that AI will seemingly always be terrible at?

      At some point the level of detail is utter garbo and always will be. An artist who was thoughtful could have some mistakes but someone who put that much time into a drawing wouldn't have:

      - Nightmarish screaming faces on most people

      - A sign that seemingly points both directions (or the wrong way for a lake), and a first aid tent that doesn't exist

      - A dog in bottom left and near lake which looks like some sort of fuzzy monstrosity...

      It looks SO impressive before you try to take in any detail. The hand selected images for the preview have the same shit. The view of musculature has a sternocleidomastoid with no clavicle attachment. The periodic table seems good until you take a look at the metals...

      We're reconfiguring all of our RAM & GPUs and wasting so much water and electricity for crappier where's Waldos??

      • p1esk 9 hours ago ago

        > AI will seemingly always be ...

        You do realize that the whole image generation field is barely 10 years old?

        I remember how I was able to generate mnist digits for the first time about 10 years ago - that seemed almost like magic!

    • vova_hn2 8 hours ago ago

      Thanks for the image, I will see their faces in my nightmares.

      • vunderba 8 hours ago ago

        This happens all too frequently when you ask a GenAI model to create an image with a large crowd, especially a “Where’s Waldo?” style scene, where by definition you’re going to be examining individual faces very closely.

      • hackable_sand 6 hours ago ago

        What about the faces of the people ChatGPT killed?

    • pants2 14 hours ago ago

      The second 4K image definitely has a raccoon on the left there! Nice.

    • nerdsniper 9 hours ago ago

      That is a devilishly difficult prompt for current diffusion tasks. Kudos.

    • ritzaco 14 hours ago ago

      haha took me a while to notice that one of the buildings is labelled 'Ham radio'

    • ElFitz 14 hours ago ago

      Damn. There’s a fun game app to make here ^^

      • dymk 9 hours ago ago

        Is there? The moment you look closely at the puzzle (which is... the whole point of Where's Waldo), you notice all the deformities and errors.

        • ElFitz 3 hours ago ago

          Yes, it’s not there yet. But nothing unsolvable. The first thing that comes to mind would be generating a smaller portion at the same resolution, then expanding through tiling (although one might need to use another service & model for this), like we used to do with Stable Diffusion years ago.

          Another option would be generating these large images, splitting them into grids, and using inpainting on each "tile" to improve the details. Basically the reverse of the first one.

          Both significantly increase costs, but for the second one having what Images 2.0 can produce as an input could help significantly improve the overall coherence.
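          The grid-splitting step is the mechanical part; a minimal sketch of computing PIL-style (left, top, right, bottom) tile boxes for a 3840x2160 image, assuming a hypothetical 4x4 grid:

```python
def tile_boxes(width, height, cols, rows):
    """PIL-style (left, top, right, bottom) boxes covering the image exactly,
    with no gaps or overlaps, even when dimensions don't divide evenly."""
    boxes = []
    for r in range(rows):
        for c in range(cols):
            boxes.append((
                width * c // cols,         # left
                height * r // rows,        # top
                width * (c + 1) // cols,   # right
                height * (r + 1) // rows,  # bottom
            ))
    return boxes

boxes = tile_boxes(3840, 2160, 4, 4)  # 16 tiles of 960x540
```

          Each box would then be cropped out, inpainted, and pasted back; in practice you'd overlap the boxes slightly so the inpainting model can hide the seams.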

        • amelius an hour ago ago

          Yes sounds more like a fun research project instead.

    • arealaccount 14 hours ago ago

      I see the raccoon

    • tptacek 14 hours ago ago

      5.4 thinking says "Just right of center, immediately to the right of the HAM RADIO shack. Look on the dirt path there: the raccoon is the small gray figure partly hidden behind the woman in the red-and-yellow shirt, a little above the man in the green hat. Roughly 57% from the left, 48% from the top."

      (I don't think it's right).

      • ritzaco 14 hours ago ago

        I tried

        > please add a giant red arrow to a red circle around the raccoon holding a ham radio or add a cross through the entire image if one does not exist

        and got this. I'm not sure I know what a ham radio looks like though.

        https://i.ritzastatic.com/static/ffef1a8e639bc85b71b692c3ba1...

        • jackpirate 14 hours ago ago

          Also, the raccoon it circled isn't in the original.

          • Aurornis 14 hours ago ago

            I love how perfectly this captures the difficulties of using generative AI for detection tasks.

            • jetbalsa 8 hours ago ago

              Oh god yes, I've been trying to make an LLM-assisted Magic: The Gathering card scanner... it's been a hell of a time trying to get it to just OCR card names well.

              • what 7 hours ago ago

                Why would you use an LLM for OCR?

          • angiolillo 14 hours ago ago

            Indeed. I suppose one way to ensure you can find Waldo in any image is to add it yourself.

        • simonw 13 hours ago ago
        • davecahill 7 hours ago ago

          Hilarious - I tried and got the same thing.

          there was a very large bear in the first image; when asked to circle the raccoon it just turned the bear into a giant raccoon and circled it.

  • neom 10 hours ago ago

    Here is my regular "hard prompt" I use for testing image gen models:

    "A macro close-up photograph of an old watchmaker's hands carefully replacing a tiny gear inside a vintage pocket watch. The watch mechanism is partially submerged in a shallow dish of clear water, causing visible refraction and light caustics across the brass gears. A single drop of water is falling from a pair of steel tweezers, captured mid-splash on the water's surface. Reflect the watchmaker's face, slightly distorted, in the curved glass of the watch face. Sharp focus throughout, natural window lighting from the left, shot on 100mm macro lens."

    google drive with the 2 images: https://drive.google.com/drive/folders/1-QAftXiGMnnkLJ2Je-ZH...

    Ran a bunch both on the .com and via the API; none of them are nearly as good as Nano Banana.

    (My file share host used to be so good and now it's SO BAD, I've re-hosted with them for now I'll update to google drive link shortly)

    • jcattle 3 hours ago ago

      I mean, your prompt is basically this skit: https://www.youtube.com/watch?v=BKorP55Aqvg ("The Expert" 7 red lines: all strictly perpendicular, some with green ink some with transparent ink)

      I couldn't imagine the image you were describing. I've listed some of the red lines with green ink I've noticed in your prompt:

      Macro Close Up - Sharp throughout

      Focus on tiny gear - But also on tweezers, old watchmakers hand, water drop?

      Work on the mechanism of the watch (on the back of the watch) - but show the curved glass of the watch face which is on the front

      This is the biggest. Even if the mechanism is accessible from the front, you'd have to remove the glass to get to it. It just doesn't make sense and that reflects in the images you get generated. There's all the elements, but they will never make sense because the prompt doesn't make sense.

      • fc417fc802 2 hours ago ago

        The last point (reflection by front glass versus mechanism access, so no front glass) is the only issue I see with it. Other than that I can easily visualize an image that satisfies the prompt. I think that the general idea is a good one because it's satisfiable while having multiple competing requirements that impose geometric constraints on the scene without providing an immediate solution to said constraints, as well as requiring multiple independent features (caustics, reflections, fluid dynamics, refraction, directional lighting) that are quite complicated to get right.

        To illustrate that there aren't any contradictions (other than the final bit about the reflection in the glass): consider a macro shot showing partial hands, partial tweezers, and pocket watch internals. That much is certainly doable. Now imagine the partial left hand holding a half-submerged pocket watch, fingertips of the right hand holding the front half of tweezers that are clasping a tiny gear, positioned above the work piece with the drop of water falling directly below. Capture the watchmaker's perspective. I could sketch that, so an image model capable of 3D reasoning should have no trouble.

        It's precisely the sort of scene you'd use to test a raytracer. One thing I can immediately think to add is nested dielectrics. Perhaps small transparent glass beads sitting at the bottom of the dish of water with the edge of the pocket watch resting on them, make the dish transparent glass, and place the camera level with the top of the dish facing forward?

        https://blog.yiningkarlli.com/2019/05/nested-dielectrics.htm...

        A second thing I can think to add is a flame. Perhaps place a tealight candle on the far side of the dish, the flame visible through (and distorted by) the water and glass beads?

        • jcattle 2 hours ago ago

          Without the last point with the watch glass it is also easier to imagine for me. Still, you'd have to be selective.

          Do you want it to actually look like macro photography (neither of the generated images do)? Then you can't have it sharp throughout and you won't be able to show the (sharp) watchmakers face in a reflection because it would be on a different focal plane.

          Dropping the macro requirement, you can show a lot more. You can show that the watchmaker is actually old, you can show the reflection, etc.

          Something has to give in the prompt, on multiple of the requirements. The generated images are dropping the macro requirement and are inventing some interesting hinging watch glass contraptions to make sense of it.

          • fc417fc802 2 hours ago ago

            Yeah, fair enough. I figure "macro" sees sufficiently loose use that a model should be able to make sense of it but to get the prompt into perfect shape that ought to be replaced with something like "a closeup showing X, Y, Z in perfect focus". Still the only real problem I see is the aforementioned contradiction regarding the front glass. Short of that single detail an artist could easily satisfy the description as written to well within reason.

    • rrr_oh_man 9 hours ago ago

      Why would you consider this a good prompt?

      • brynnbee 7 hours ago ago

        My observations have been that image generation is especially challenged when asked to do things that are unusual. The fewer instances of something happening it has to train on, the worse it tends to be. Watch repair done in water fits that well - is there a single image on the internet of someone repairing a watch that is partially submerged in water? It also tends to be bad at reflections and consistency of two objects that should be the same.

    • the_lucifer 9 hours ago ago

      Looks like your image host has rate limited viewing the shared images, wanted to give you a heads up

      • neom 9 hours ago ago

        Thanks, I need to get off Zight, they used to be such a nice option for fast file share but they've really suffered some of the worst enshittification I've seen yet.

    • pb7 10 hours ago ago

      Links are broken.

      • waynesonfire 9 hours ago ago

        So.. sign up. "Get Sight for free". Ads everywhere bro.

  • aledevv 3 minutes ago ago

    Only vintage-style images?

  • madrox 11 hours ago ago

    This seems like a great time to mention C2PA, a specification for positively affirming image sources. OpenAI participates in this, and if I load an image I had AI generate in a C2PA Viewer it shows ChatGPT as the source.

    Bad actors can strip sources out so it's a normal image (that's why it's positive affirmation), but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.

    Learn more at https://c2pa.org

    • debazel 7 hours ago ago

      > but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.

      Yes, let's make all images proprietary and locked behind big tech signatures. No more open source image editors or open hardware.

      • henry-j 6 hours ago ago

        C2PA is actually an open protocol, à la SMTP. The whole spec is at https://spec.c2pa.org/, available for anyone to implement.

      • Melatonic 2 hours ago ago

        Why would the image itself have to be proprietary to have some new piece of metadata attached to it?

    • mdasen 9 hours ago ago

      > Bad actors can strip sources out

      I think the issue is that it's not just bad actors. It's every social platform that strips out metadata. If I post an image on Instagram, Facebook, or anywhere else, they're going to strip the metadata for my privacy. Sometimes the exif data has geo coordinates. Other times it's less private data like the file name, file create/access/modification times, and the kind of device it was taken on (like iPhone 16 Pro Max).

      Usually, they strip out everything and that's likely to include C2PA unless they start whitelisting that to be kept or even using it to flag images on their site as AI.

      But for now, it's not just bad actors stripping out metadata. It's most sites that images are posted on.
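
      As a toy sketch of the mechanics (a simplification, not how any real platform's image pipeline works): JPEG metadata lives in APPn/COM segments, so "stripping" amounts to dropping those segments while copying everything else. Since C2PA manifests in JPEG typically ride in an APP segment too, naive stripping removes them right along with EXIF:

```python
def strip_metadata(jpeg: bytes) -> bytes:
    """Drop APPn/COM segments (EXIF is APP1; C2PA JUMBF typically rides in APP11).

    Toy sketch: assumes a well-formed stream and stops parsing at
    start-of-scan, so it skips entropy-coded-data subtleties.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(jpeg[:2])                 # keep the SOI marker
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:                    # start-of-scan: copy the rest verbatim
            out += jpeg[i:]
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if not (0xE0 <= marker <= 0xEF or marker == 0xFE):
            out += jpeg[i:i + 2 + length]     # keep non-APP, non-COM segments
        i += 2 + length
    return bytes(out)

# Synthetic stream: SOI + APP1 ("Exif...") + DQT + SOS (not a decodable image)
exif = b"Exif\x00\x00hello"
app1 = b"\xff\xe1" + (len(exif) + 2).to_bytes(2, "big") + exif
dqt = b"\xff\xdb" + (7).to_bytes(2, "big") + b"\x00" * 5
sos = b"\xff\xda\x00\x0cscan-data"
stripped = strip_metadata(b"\xff\xd8" + app1 + dqt + sos)
print(b"Exif" in stripped)  # False: the APP1 metadata segment is gone
```

      Whitelisting provenance would mean special-casing exactly the segments a loop like this throws away.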

      • henry-j 6 hours ago ago

        There’s actually a part of the NY state budget right now (TEDE part X, for my law nerds) that’d require social media companies to preserve non-PII provenance metadata and surface it to the user, if the uploaded image has it.

        LinkedIn already does this; see https://www.linkedin.com/help/linkedin/answer/a6282984. X’s “made with AI” feature preserves the metadata but doesn’t fully surface it (https://www.theverge.com/ai-artificial-intelligence/882974/x...)

      • madrox 9 hours ago ago

        You're implying social platforms aren't bad actors ;)

        In seriousness, social platforms attributing images properly is a whole frontier we haven't even begun to explore, but we need to get there.

    • woadwarrior01 10 hours ago ago

      Yeah, OpenAI has been attaching C2PA manifests to all their generated images from the very beginning. Also, based on a small evaluation that I ran, modern ML based AI generated image detectors like OmniAID[1] seem to do quite well at detecting GPT-Image-2 generated images. I use both in an on-device AI generated image detector that I built.

      [1]: https://arxiv.org/abs/2511.08423

    • paradoxyl 3 hours ago ago

      What a dystopian, pro-tyranny ask. Horrifying.

  • swalsh 11 hours ago ago

    Been using the model for a few hours now. I'm actually really impressed with it. This is the first time I've found value in an image model for stuff I actually do. I've been using it to build PowerPoint slides and mockups. It's CRAZY good at that.

    • johnwheeler 8 hours ago ago

      Yeah, it's funny. I would expect to see more enthusiasm versus just basic run-of-the-mill, "oh, there it is". Leave it to the HN crowd. This is incredible. I don't even like OpenAI.

      • pembrook 2 hours ago ago

        HN is engineer heavy, so it's a bunch of people who spend their days looking at code. If it's not a coding model they'll likely never use it.

        To the average HN'er, images and design are superfluous aesthetic decoration for normies.

        And for those on HN who do care about aesthetics, they're using Midjourney, which blows any GPT/Gemini model out of the water when it comes to taste even if it doesn't follow your prompt very well.

        The examples given on this landing page are stock image-esque trash outside of the improvements in visual text generation.

  • AltruisticGapHN 38 minutes ago ago

    This is insanely good. But wow, prompting to get any one of these images is way more complicated than prompting Claude Code. There is a ton of vocabulary that comes with it relating to the camera, the lighting, the mood etc.

  • justani 4 hours ago ago

    I have a few cases where nano banana fails all the time, even gpt image 2 is failing.

    A 3 * 3 cube made out of small cubes, with a small 2 * 2 cube removed from it - https://chatgpt.com/share/69e85df6-5840-83e8-b0e9-3701e92332...

    Create a dot grid containing a rectangle covering 4 dots horizontally and 3 dots vertically - https://chatgpt.com/share/69e85e4b-252c-83e8-b25f-416984cf30...

    One where Nano banana fails but gpt image 2 worked: create a grid from 1 to 100 and in that grid put a snake, with its head at 75 and tail at 31 - https://chatgpt.com/share/69e85e8b-2a1c-83e8-a857-d4226ba976...

  • skybrian 11 hours ago ago

    This time it passed the piano keyboard test:

    https://chatgpt.com/s/m_69e7ffafbb048191b96f2c93758e3e40

    But it screwed up when attempting to label middle C:

    https://chatgpt.com/s/m_69e8008ef62c8191993932efc8979e1e

    Edit: it did fix it when asked.

  • schneehertz 9 hours ago ago

    Generating a 4096x4096 image with gemini-3.1-flash-image-preview consumes 2,520 tokens, which is equivalent to $0.151 per image.

    Generating a 3840x2160 image with gpt-image-2 consumes 13,342 tokens, which is equivalent to $0.4 per image.

    This model is more than twice as expensive as Gemini.

  • porphyra 11 hours ago ago

    The improvement in Chinese text rendering is remarkable and impressive! I still found some typos in the Chinese sample pic about Wuxi though. For example the 笼 in 小笼包 was written incorrectly. And the "极小中文也清晰可读" section contains even more typos although it's still legible. Still, truly amazing progress. Vastly better than any previous image generation model by a large margin.

    • Lucasoato 9 hours ago ago

      Is this even better than Chinese models? I suppose they focus much more on that aspect, simply because their training data might include many more examples of Chinese text.

      • Ladioss an hour ago ago

        Maybe they just use Qwen Image under the hood ;p

  • amunozo 14 hours ago ago

    This is not as exciting as previous models were, but it is incredibly good. I am starting to think that expressing thoughts in words clearly is probably the most important and general skill of the future.

    • bamboozled 16 minutes ago ago

      In other words, communication is an important skill.

    • aulin 4 hours ago ago

      Well that was probably the most important general skill even before this.

      • sigmoid10 an hour ago ago

        Perhaps for managers. But for everyone actually doing something, you used to need technical proficiency with tools. Now AI is becoming the universal tool.

    • echelon 11 hours ago ago

      > I am starting to think that expressing thoughts in words clearly is probably the most important and general skill of the future.

      Without question.

      AI will be indistinguishable from having a team. Communicating clearly has always mattered and always will.

      This, however, is even stronger. Because you can program and use logic in your communications.

      We're going to collectively develop absolutely wild command over instruction as a society. That's the skill to have.

      • adamhartenz 8 hours ago ago

        How can AI be the amazing thing you say it is, yet too stupid to understand you unless you get really good at communicating? Wouldn't better AI just mean it understands your ramblings better?

        • pickleRick243 7 hours ago ago

          It's fine if the "rambling" is logically coherent. So the communication ability isn't really about expressing your thoughts eloquently, but just effectively and clearly. Run-on sentences and trains of thought are fine as long as you're saying something meaningful. But no AI will be able to read your mind and know exactly what you mean by "make really cool looking website, not lame please, also nice colors, not boring". Declarative programming through natural language will become incredibly powerful.

        • jstanley 3 hours ago ago

          It can't extract information that isn't there. If your ramblings are ambiguous then it has to make a guess.

        • raincole 7 hours ago ago

          Many humans are great at their expertise but bad at communicating. How?

      • yreg 10 hours ago ago

        On the other hand LLMs are getting very good at understanding poorly constructed instructions as well.

        So being able to express oneself clearly in a structured way may not be such an edge.

        • amunozo 4 hours ago ago

          Yes, I agree, but as one of the other comments says, they're not able to read your mind. So even if the structure and style aren't clear, you must still be able to express what you want.

  • VA1337 14 minutes ago ago

    So is it better than nano-banana after all?

  • ____tom____ 14 hours ago ago

    No mention of modifying existing images, which is more important than anything they mentioned.

    I think we all know the feeling of getting an image that is ok, but needs a few modifications, and being absolutely unable to get the changes made.

    It either keeps coming up with the same image, or gives you a completely new take on the image with fresh problems.

    Anyone know if modification of existing images is any better?

    Anything better that OpenAI?

    • frmersdog 10 hours ago ago

      Image editing program -> different versions of the image, each with some but not all of the elements you want, on each layer -> mask out the parts you don't need/apply mask, fill with black, soft brush with white the parts you want back in. Copy flattened/merged, drop it back into the image model, keep asking for the changes. As long as each generation adds in an element you want, you can build a collage of your final image.
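
      The mask-and-merge step can be sketched in plain Python over raw grayscale pixel grids (a toy stand-in for the layer masking you'd actually do in an image editor):

```python
def masked_merge(base, layer, mask):
    """Where the mask pixel is white (nonzero), take `layer`; else keep `base`."""
    return [[l if m else b for b, l, m in zip(brow, lrow, mrow)]
            for brow, lrow, mrow in zip(base, layer, mask)]

base  = [[10, 10], [10, 10]]   # flattened previous generation
layer = [[99, 99], [99, 99]]   # new generation with the element you want
mask  = [[0, 255], [0, 0]]     # white-brushed region to bring back in
print(masked_merge(base, layer, mask))  # [[10, 99], [10, 10]]
```

      Each round, you flatten the merged result and feed it back as the next reference image, so the collage only ever accumulates the parts you kept.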

    • user34283 3 hours ago ago

      It's the first thing I tried, because Nano Banana 2 deteriorates the output with each turn, becoming unusable with just a few edits.

      ChatGPT Images 2.0 made it unusable at the first turn. At least in the ChatGPT app editing a reference image absolutely destroyed the image quality. It perfectly extracted an illustration from the background, but in the process basically turned it from a crisp digital illustration into a blurry, low quality mess.

    • tomjen3 14 hours ago ago

      There was an Edit button in one of the images in the livestream

  • dktp 13 hours ago ago

    One interesting thing I found comparing OpenAI and Gemini image editing is - Gemini rejects anything involving a well known person. Anything. OpenAI is happy to edit and change every time I tried

    I have a side project where I want to display standup comedies. I thought I could edit standup comedy posters with some AI to fit my design. Gemini straight up refuses to change any image of any standup comedy poster involving a well-known human. OpenAI does not care and is happy to edit away

    • Melatonic 13 hours ago ago

      How does it determine they are well known and not just similar looking?

      • yreg 10 hours ago ago

        Gemini often rejects photos of random people (even ones it generated itself) because it thinks they look too similar to some well known person.

      • dktp 13 hours ago ago

        I don't know tbh. I've tried it on 10-20 standups of varying levels of fame and Gemini refuses every time

        Just for testing, I just tried this https://i.ytimg.com/vi/_KJdP4FLGTo/sddefault.jpg ("Redesign this image in a brutalist graphic design style"). Gemini refuses (api as well as UI), OpenAI does it

        • arjie 13 hours ago ago

          It's not super deterministic but it didn't fail once on my attempts. See: https://imgur.com/a/james-acaster-cold-lasagne-1R7fpzQ

          • dktp 13 hours ago ago

            Very interesting. It fails every single time for me. I'm in Germany, maybe Google is stricter here?

            See https://imgur.com/a/77BRDQv

            • arjie 12 hours ago ago

              That makes sense to me. I just Googled around like a fool and got here https://en.wikipedia.org/wiki/Personality_rights#Germany

              It seems like they're trying to follow local law. What a nightmare to have to manage all jurisdictions around such a product. Surprised it didn't kill image generation entirely.

              • jliptzin 11 hours ago ago

                Yea, especially when they know all that work will be completely pointless in a few years, once open source / local models are just as good and free of legal limitations. At that point people will be generating fake images of famous people like crazy with nothing stopping them

        • Melatonic 13 hours ago ago

          What if you change the prompt to tell it specifically its not a famous person? Or try it without text?

      • BoorishBears 7 hours ago ago

        There are models specifically for detecting well known people https://docs.aws.amazon.com/rekognition/latest/dg/celebritie...

    • vunderba 10 hours ago ago

      Are you using Google Gemini directly? I've found the Vertex API seems to be significantly less strict.

  • elAhmo 20 minutes ago ago

    I am super out of the loop here, what happened with Dall-E?

  • rambojohnson an hour ago ago

    Just tried it and got six fingers and half a thumb on a simple portrait. Mickey Mouse stuff.

  • sanex 7 hours ago ago

    Having the launch website just scrollable generated images is so slick. I love this.

    • gverrilla 5 hours ago ago

      You can click the images too, to see the prompt that got them gen'ed.

  • overgard 10 hours ago ago

    Pretty mixed feelings on this. From the page at least, the images are very good. I'd find it hard to know that they're AI, which I think is a problem. If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated.

    I also don't like that these things are trained on specific artist's styles without really crediting those artists (or even getting their consent). I think there's a big difference between an individual artist learning from a style or paying it homage, vs a machine just consuming it so it can create endless art in that style.

    • niek_pas an hour ago ago

      Maybe I'm stupid and naive but I just don't really see how any of this is _fundamentally_ different from Photoshop. Trusting the images you're looking at on the internet has been impossible for a long time. That's why we have institutions and social relations we place trust in instead.

    • kansface 9 hours ago ago

      > If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated..

      Not a lawyer, but that reads as compelled speech to me. Materially misrepresenting an image would be libel, today, right?

      • overgard 7 hours ago ago

        Well, considering that AI generated content can't be copyrighted (afaik at least), I think we're in very different legal territory when it comes to AI creating things. While it's true that deepfakes could be considered libel.. good luck prosecuting that if you can't even figure out where the image came from.

        The problem is it's all too easy to generate - you can't really do much about an individual piece of slop because there's so much of it. I think we need a way to filter this stuff, societally.

    • bryanhogan 9 hours ago ago

      Trying to watermark or otherwise label them as AI generated is a lost fight, we should assume every image and video we see online may be AI generated.

      • rootusrootus 9 hours ago ago

        This helps the segment of society that is interested in applying critical thinking to what they see. I am not sure that is anything like a majority or even a significant plurality. It seems like just about every image or video gets accused of being AI these days, but predictably the accusations depend on the ideology of the accuser.

    • apsurd 10 hours ago ago

      You might be onto something. I find every image unsettling. They're very good, no doubt, but maybe it disturbs me because all of it is a complete copy of what someone else created. I know, I know, there is no pure invention. That's not what I mean. Humans borrow from other humans all the time. There's a humanity in that! A machine fully repurposing a human contribution as some kind of new creation... I dunno, I'm old; it's weird and I don't like it.

      Maybe i'm just bloviating also.

    • drstewart 19 minutes ago ago

      >If we had a functioning congress, I wonder if we might end up with legislation that these things need to be watermarked or otherwise made identifiable as AI generated..

      Can you name any countries that you think are functioning, and what their laws are on watermarked AI images?

  • rambojohnson an hour ago ago

    Just tried it and got the usual six fingers, and half a thumb. What are they actually iterating on with these models by now…

  • squidsoup 10 hours ago ago

    Are camera manufacturers working on signed images? That seems like the only way our trust in any digital media doesn't collapse entirely.

    • randyrand 8 hours ago ago

      Signed images don’t get you much. You can just hardwire the image sensor to a computer and sign raw pixels.

      • Barbing 4 hours ago ago

          Is the situation brighter for a company that owns both the hardware and the software, like Apple?

        Taking a picture of an AI generated image aside, theoretically could Apple attest to origin of photos taken in the native camera app and uploaded to iCloud?

        Fascinating, by the way, thank you!

      • wiseowise an hour ago ago

        Make cameras tamper resistant, like POS terminals.

    • Nition 8 hours ago ago

      Ultimately even with that tech, you can still take a photo of an AI generated scene. Maybe coupled with geolocation data in the signature or something it might work.

      • Barbing 4 hours ago ago

        Any thoughts on attempted multiple camera/360 camera solutions? Can make it cost prohibitive to generate exceptional fakes… for a little while

        Kind of like showing the proctor around your room with your webcam before starting the exam.

        I think legacy media stands a chance at coming back as long as they maintain a reputation of deeply verifying images, not being fooled.

      • petesergeant 6 hours ago ago

        I see signing chains as the way to go here. Your camera signs an image, you sign the signed image, your client or editor signs the image you signed etc etc. Might finally have a use for blockchain.
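
        As a toy sketch of such a chain (HMAC with shared keys standing in here for the real public-key signatures a C2PA-style scheme would use):

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

# Each link signs the image hash plus the previous signature, so tampering
# with the image or any earlier link invalidates everything downstream.
image = b"raw sensor bytes"
h = hashlib.sha256(image).digest()
camera_sig = sign(b"camera-key", h)
editor_sig = sign(b"editor-key", h + camera_sig)

def verify(image, camera_sig, editor_sig):
    h = hashlib.sha256(image).digest()
    return (hmac.compare_digest(camera_sig, sign(b"camera-key", h))
            and hmac.compare_digest(editor_sig, sign(b"editor-key", h + camera_sig)))

print(verify(image, camera_sig, editor_sig))        # True
print(verify(b"tampered", camera_sig, editor_sig))  # False
```

        A real scheme would use per-device asymmetric keys and certificates so anyone can verify without holding the signing secrets; the chaining idea is the same.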

  • lossyalgo 10 hours ago ago

    Someone remind me again why this is a good idea to be able to create perfect fake images?

    • wiseowise 33 minutes ago ago

      Something, something democratization. Because having a skill is inherently oppressive nowadays.

  • PDF_Geek an hour ago ago

    The free tier for ChatGPT feels pretty much nerfed at this point. I’m barely getting 10 prompts in before it drops me down to the basic model. The restrictions are getting ridiculous. Is anyone else seeing this?

  • bensyverson 14 hours ago ago

    I caught the last minute of this—was it just ChatGPT Images 2.0?

  • codebolt 2 hours ago ago

    Anyone test it out for generating 2D art for games? Getting nano banana to generate consistent sprite sheets was seemingly impossible last time i tried a few months ago.

  • baalimago 3 hours ago ago

    "Benchmarks" aside, do anyone actually use these image models for anything?

    • razorbeamz an hour ago ago

      Here in Japan every fucking food truck uses them for pictures of their menu, which really pisses me off because it's not representative of their food at all.

    • medlazik 3 hours ago ago

      Look around? It's everywhere. Try talking to a graphic designer looking for a job these days. Companies didn't wait for these tools to be good to start using them.

    • croisillon 3 hours ago ago

      MAGA to show how terrible Europe is ;)

  • nickandbro 11 hours ago ago

    200+ points in Arena.ai, that's incredible. They are cleaning house with this model

  • hahahacorn 13 hours ago ago

    One of the images in the blog (https://images.ctfassets.net/kftzwdyauwt9/4d5dizAOajLfAXkGZ7...) is a carbon copy of an image from an article posted Mar 27, 2026 with credits given to an individual: https://www.cornellsun.com/article/2026/03/cornell-accepts-5...

    Was this an oversight? Or did their new image generation model generate an image that was essentially a copy of an existing image?

    • arjie 13 hours ago ago

      That has to be the wrong stock image included or something, bloody hell.

           magick image-l.webp image-r.jpg -compose difference -composite -auto-level -threshold 30% diff.png
      
      It's practically all dark except for a few spots. It's the same image, just at a different size/compression. I can't find it in any stock image search, though. Surely it could not have memorized the whole image at that fidelity. Maybe I just didn't search well enough.
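
      For anyone without ImageMagick, the difference-and-threshold trick above works roughly like this, sketched over nested RGB tuples (toy data, not the actual images):

```python
def diff_image(a, b, threshold=77):  # 77 ≈ 30% of 255, like -threshold 30%
    """Per-pixel absolute difference of two same-sized RGB rasters,
    thresholded into a black/white mask (white = pixels that really differ)."""
    out = []
    for row_a, row_b in zip(a, b):
        row = []
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            d = max(abs(r1 - r2), abs(g1 - g2), abs(b1 - b2))
            row.append(255 if d > threshold else 0)
        out.append(row)
    return out

img_a = [[(10, 10, 10), (200, 200, 200)],
         [(10, 10, 10), (10, 10, 10)]]
img_b = [[(12, 11, 10), (90, 90, 90)],   # compression noise + one real change
         [(10, 10, 10), (10, 10, 10)]]
print(diff_image(img_a, img_b))  # [[0, 255], [0, 0]]
```

      Compression noise stays below the threshold and only genuine differences survive, which is why the comparison above comes out almost entirely dark.
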
      • Melatonic 13 hours ago ago

        Or the image was generated with AI in the first place and a test for Images 2.0

        • IsTom 11 hours ago ago

          Well, it's on web archive. So unless they got their hands on it almost a month early or escaped their light cone it wasn't.

        • arjie 12 hours ago ago

          Haha! That would really take the cake. If it is, congratulations to them! I could never have known.

    • recitedropper 13 hours ago ago

      This is hilarious. Seems like kind of a random image for a model to memorize, but it could be.

      There is definitely enough empirical validation that shows image models retain lots of original copies in their weights, despite how much AI boosters think otherwise. That said, it is often images that end up in the training set many times, and I would think it strange for this image to do that.

      Regardless, great find.

      • Nition 8 hours ago ago

        I feel it's too much of a perfect match to be generated from the model's memory. It's pixel perfect. Gotta be a mistake.

    • minimaxir 13 hours ago ago

      Given the recency of that image, it is unlikely it is in the training data and therefore I would go with oversight.

  • JimsonYang 10 hours ago ago

    > you can make your own mangas

    No you can’t.

    You still have the Studio Ghibli look from the video. The main issue with generating manga was the quality of the characters; there are already multiple tools for placing your frames.

    But I am hopeful. If I put in a single frame, can it carry over that style for the next images? It would be game changing if a chat could have its own art style

  • thelucent 13 hours ago ago

    It seems to still have this gpt image color that you can just feel. The slight sepia and softness.

    • honzaik 12 hours ago ago

      I was just wondering about that. Did they embrace it as a "signature look"? It can't be accidental, right?

      • GaryBluto 10 hours ago ago

        It's definitely not accidental but I'm not completely sure whether or not it is simply a "tell" or watermark or an attempt to foster brand association.

      • dymk 9 hours ago ago

        It's the Stranger Things nostalgia filter. Almost all the sample pictures they had looked like they were vaguely from the 90s-00s era.

  • tezza 2 hours ago ago

    I've rushed out my standardised quality check images for gpt-image-2:

    https://generative-ai.review/2026/04/rush-openai-gpt-image-2...

    I've done a series over all the OpenAI models.

    gpt-image-2 has a lot more action, especially in the Apple Cart images.

  • Oras 11 hours ago ago

    My test for image models is asking it to create an image showing chess openings. Both this model and Banana pro are so bad at it.

    While the image looks nice, the actual details are always wrong, such as showing pawns in wrong locations, missing pawns, etc.

    Try it yourself with this prompt: Create a poster to show opening game for Queen's Gambit to teach kids to play chess.

    • lxgr 11 hours ago ago

      It almost nailed it for me (two squares have both white and black color). All pieces and the position look correct.

    • tempaccount5050 11 hours ago ago

      What move? Whose turn is it? Declined or accepted? Garbage in, garbage out.

      • bogtap82 11 hours ago ago

        In some cases I would agree with this, but image model releases including this one are beginning to incorporate and market the thinking step. It is not a reach at this point to expect the model to take liberties in order to deliver a faithful and accurate representation of your request. A model could still be accurate while navigating your lack of specificity.

      • timacles 9 hours ago ago

        Kasparov vs Karpov ‘87 Olympiad. Move 6

      • dudul 10 hours ago ago

        What do you mean? Parent clearly describes the Queen's Gambit. 1.d4 d5 2.c4 There is no room for ambiguity here.

        • kuboble 7 hours ago ago

          King's Indian Defense would be a better prompt, as Queen's Gambit can now refer to e.g. some scene from the Netflix series.

  • BohdanPetryshyn 23 minutes ago ago

    Am I the only one for whom videos in OpenAI releases never load? Tried both Chrome and Safari

  • RigelKentaurus 13 hours ago ago

    If every single image on their blog was generated by Images 2.0 (I've no reason to believe that's not the case), then wow, I'm seriously impressed. The fidelity to text, the photorealism, the ability to show the same character in a variety of situations (e.g. the manga art) -- it's all great!

  • platinumrad 10 hours ago ago

    Why do all of the cartoons still look like that? Genuinely asking.

    • orthoxerox 40 minutes ago ago

      That was my reaction as well. Either they have decided that LLMs have this "house style" for stylized 2D art and we should deal with it, or no amount of prompting can get rid of it.

  • jumploops 5 hours ago ago

    Looks like analog clocks work well enough now, however it still struggles with left-handed people.

    Overall, quite impressed with its continuity and agentic (i.e. research) features.

  • modeless 11 hours ago ago

    Can it generate transparent PNGs yet?

    • alasano 11 hours ago ago

      Previous gpt image models could (when generating, not editing) but gpt-image-2 can't.

      Noticed it earlier while updating my playground to support it

      https://github.com/alasano/gpt-image-playground

      • lxgr 11 hours ago ago

        Works for me, but really weirdly on iOS: Copying to clipboard somehow seems to break transparency; saving to the iOS gallery does not. (And I’ve made sure to not accidentally depend on iOS’s background segmentation.)

    • vunderba 9 hours ago ago

      OpenAI’s API docs are frustratingly unclear on this. From my experience, you can definitely generate true transparent PNG files through the ChatGPT interface, including with the new GPT-Image-2 model, but I haven’t found any definitive way to do the same thing via the API.

  • dazhbog 11 hours ago ago

    Yay, let's burn the planet computing more slopium..

  • naseemali925 4 hours ago ago

    It's amazingly good at creating UI mockups. I've been using it to mock up ideas.

  • mvkel 7 hours ago ago

    I wonder if this confirms version 1 of some kind of "world model."

    It has an unprecedented ability to generate the real thing (for example, a working barcode for a real book)

  • vunderba 8 hours ago ago

    I decided to run gpt-image-2 on some of the custom comics I’ve come up with over the years to see how well it would do, since some of them are pretty unusual. Overall, I was quite impressed with how faithful it adhered to the prompts given that multi-panel stuff has to maintain a sense of continuity.

    Was surprised to see it be able to render a decent comic illustrating an unemployed Pac-Man forced to find work as a glorified pie chart in a boardroom of ghosts.

    https://mordenstar.com/other/gpt-2-comics

  • franze 9 hours ago ago

    the tragedy of image generating ai is that it is used to massively create what already exists instead of creating something truly unique - we need ai artists - and yeah, they will not be appreciated

    • franze 9 hours ago ago

      So yeah, a smart move for OpenAI would be to sponsor artists - provocative ones, junior ones, ones with nothing to lose - but that cell in the spreadsheet will be too small to register and will probably never happen

  • StefanBatory an hour ago ago

    Do you think those working at ChatGPT have ever wondered how they are contributing to dismantling democracy and ensuring nothing is true by now? The ultimate technological postmodernism.

    • wiseowise an hour ago ago

      They’re too busy counting cash. Most of them are what? 30 something to 50? By the time democracy is dismantled they’ll be living in their protected mansions.

  • etothet 11 hours ago ago

    I would love to see prompt examples that created the images on the announcement page.

    • DauntingPear7 10 hours ago ago

      You can by changing the view before the gallery

  • jcattle 2 hours ago ago

    Can we talk about how jarring the announcement video is?

    AI-generated voiceover, likely an AI-generated script ("You see, this model isn't just generating images, it's thinking!"). From what it looks like, only the editing has some human touch to it.

    It does this Apple-style announcement that everyone is doing, but through the use of AI, and at least for me it falls right into the uncanny valley.

  • james2doyle 10 hours ago ago

    In the next round of ChatGPT advertisements, if they don’t use AI generated images, then that means they don’t believe in their own product right?

  • muyuu 11 hours ago ago

    I wonder if this will be decent at creating sprite frame animations. So far I've had very poor results and I've had to do the unthinkable and toil it out manually.

    • vunderba 9 hours ago ago

      I created this little demo of an animated sprite sheet using generative AI. It's not great, but it is passable.

      https://mordenstar.com/other/hobbes-animation/

      • muyuu 9 hours ago ago

        Looks good to me. Would be nice to see the process. I'm having trouble with parts of the stride when the far leg is ahead. Doing 8-directional isometric right now.

    • freedomben 11 hours ago ago

      I had exactly the same thought! I've got a game I've been wanting to build for over a decade that I recently started working on. The art is going to be very challenging however, because I lack a lot of those skills. I am really hoping the AI tools can help with that.

      Is anyone doing this already who can share information on what the best models are?

      • gizmodo59 11 hours ago ago

        Use the imagegen skill in codex and ask it to create sprites. It works really well.

        • muyuu 9 hours ago ago

          I didn't have great success last I tried, but I will give it another shot this week. Presumably they incorporated improvements to the skill?

        • freedomben 10 hours ago ago

          Thank you!

    • ZeWaka 10 hours ago ago

      It's still bad.

  • fizlebit 4 hours ago ago

    Scrolling through those images it just feels like intellectual theft on a massive scale. The only place I think you're going to get genuinely new ideas is from humans. Whether those humans use AI or not I don't care, but the repetitive slop of AI copying the creative output of humans I don't find that interesting. Call me a curmudgeon. I guess humans also create a lot of derivative slop even without AI assistance. If this leads somehow to nicer looking user interfaces and architecture maybe that is good thing. There are a lot of ugly websites, buildings and products.

  • lifeisstillgood 10 hours ago ago

    Pretty much all of the kerfuffle over AI would go away if it were accurately priced.

    After 2008 and 2020, vast amounts of money (tens of trillions) were printed (reasonably) by Western governments and not eliminated from the money supply. So there are vast sums swilling about - and funding things like massively computationally intensive work to help me pick a recipe for tonight.

    Google and Facebook had online advertising sewn up - but AI is waaay better at answering my queries. So OpenAI wants some of that - but the cost per query must be orders of magnitude larger.

    So charge me, or my advertisers the correct amount. Charge me the right amount to design my logo or print an amusing cat photo.

    Charge me the right cost for the AI slop on YouTube

    Charge the right amount - and watch as people just realise it ain’t worth it 95% of the time.

    Great technology - but price matters in an economy.

  • kanodiaayush 11 hours ago ago

    It stands out to me that this page itself is wonderful to go through (the telling of the product through model generated images).

  • cyberjunkie 5 hours ago ago

    Looks like AI and I look away from any image generated by a LLM. It's my easy internal filter to weed out everything that isn't art.

  • tomchui157 2 hours ago ago

    Img2+ seed dance 2 = image AGI

  • dakiol 11 hours ago ago

    > On the flip side, there are hundreds of ways that these tools cause genuine harm, not just to individuals but to entire systems.

    Yeah, agree. I think it's the first time I'm asking myself: Ok, so this new cool tech, what is it good for? Like, in terms of art, it's discarded (art is about humans); in terms of assets: sure, but people are getting tired of AI-generated images (and even if we cannot tell if an image is AI-generated, we can know if companies are using AI to generate images in general, so the appeal is decreasing). Ads? C'mon, that's depressing.

    What else? In general, I think people are starting to realize that things generated without effort are not worth spending time with (e.g., no one is going to read your 30-page draft generated by AI; no one is going to review your 500-file PR generated by AI; no one is going to be impressed by the images you generate with AI; same goes for music and everything else). I think we are gonna see a Renaissance of "human-generated" sooner rather than later. I see it already at work (colleagues writing in Slack "I swear the next message is not AI generated" and the like)

    • lucaslazarus 11 hours ago ago

      > I think it's the first time I'm asking myself: Ok, so this new cool tech, what is it good for?

      I feel like this is something people in the industry should be thinking about a lot, all the time. Too many social ills today are downstream of the 2000s culture of mainstream absolute technoöptimism.

      Vide. Kranzberg's first law--“Technology is neither good nor bad; nor is it neutral.”

      • runarberg 11 hours ago ago

        Completely unrelated, but I am curious about your keyboard layout, since you typed ö instead of -. These two symbols are side by side in the Icelandic layout, and ö is where - sits in the English (US) layout. As such this is a common type-o for people who regularly switch between the Icelandic and the English (US) layouts (source: I am that person). I am curious whether there are more layouts where that could be common.

        • bulletsvshumans 10 hours ago ago

          This is also a stylistic choice that the New Yorker magazine uses for words with double vowels where you pronounce each one separately, like coöperate, reëlect, preëminent, and naïve. So possibly intentional.

          • lucaslazarus 10 hours ago ago

            Yes, this is exactly correct, and I will die on this hill. Additionally, I don't like the way a hyphenated "techno-optimism" looks and "technOOPtimism" is a bit too on-the-nose.

          • runarberg 10 hours ago ago

            That makes sense[1] but it prompts the obvious question: does this style write it as typeö then?

            1: Though personally I hate it, I just cannot not read those as completely different vowels (in particular ï → [i:] or the ee in need; ë → [je:] or the first e here; and ö → [ø] or the e in her)

            • lucaslazarus 8 hours ago ago

              No. Firstly because it is spelled “typo.” Secondly you typically use the diaeresis to tell the reader to not confuse it with a similarly spelled sound or diphthong. So it tells a reader that “reëlect” is not pronounced REEL-ect, “coöperate” is not COOP-uh-ray-t, and “naïve” is not NAY-v.

              • losvedir 8 hours ago ago

                Because written English makes so much sense normally. God forbid someone has to figure out the ambiguous pronunciation of those particular words. It seems like a silly thing to provide extra guidance on to me.

        • heisenzombie 10 hours ago ago

          I suspect the diaresis was intentional, in “New Yorker” style.

          https://www.arrantpedantry.com/2020/03/24/umlauts-diaereses-...

    • lxgr 11 hours ago ago

      I can’t design wallpapers/stickers/icons/…, but I can describe what I want to an image generation model verbally or with a source photo, and the new ones yield pretty good results.

      For icons in particular, this opens up a completely new way of customizing my home screen and shortcuts.

      Not necessary for the survival of society, maybe, but I enjoy this new capability.

      • latexr 10 hours ago ago

        So we get a fresh new cheap way to spread propaganda and lies and erode trust all across society, while cementing power and control for a few at the top, and in return get a few measly icons (as if there weren’t literally thousands of them freely available already) and silly images for momentary amusement?

        What a rotten exchange.

        • SamuelAdams 10 hours ago ago

          I wonder what will happen to the entire legal system. It used to be fairly difficult to create convincing photos and videos.

          AI can probably fool most court judges now. Or the defense can refute legitimate evidence by saying “it’s AI / false”. How would that be refuted?

          • jll29 9 hours ago ago

            Yes, that is a major worry of mine, too. CCTV evidence is worth nil now (it could be generated in whole or in part), and even eye-witness testimony can't be trusted (sure, a witness may think they saw the alleged perpetrator, but perhaps they just saw an AI-generated video/projection of someone).

          • BLKNSLVR 9 hours ago ago

            MS13 was literally tattooed on his knuckles!

          • Gigachad 9 hours ago ago

            Multiple data sources, considering the trustworthiness of the source of the information, and accountability for lying.

            You might generate an AI video of me committing a crime, But the CCTV on the street didn't show it happening and my phone cell tower logs show I was at home. For the legal system I don't think this is going to be the biggest problem. It's going to be social media that is hit hardest when a fake video can go viral far faster than fact checking can keep up.

          • idiotsecant 10 hours ago ago

            By having people also testify to authenticity and coming down like the hand of God on fakers, the same way we make sure evidence is real now.

          • gedy 9 hours ago ago

            If it means anything, I have a 1990 Almanac from an old encyclopedia that warns the exact same thing about digital photo manipulation. I don't think it really matters at this point

        • jll29 9 hours ago ago

          AI can also be used to fight propaganda, for instance BiasScanner makes you aware of potentially manipulative news: https://biasscanner.org .

          So that makes AI a "dual good", like a kitchen knife: you can cut your tomato or kill your neighbor with it, entirely up to the "user". Not all users are good, so we'll see an intense amplification of both good and bad.

          • jrumbut 9 hours ago ago

            AI is certainly a dual good but I think the project is misguided at best.

            I put in one of the driest descriptions of the Holocaust I could find and it got a very high score for bias, calling a factual description of a massacre emotional sensationalism because it inevitably contains a lot of loaded words.

            It also doesn't differentiate between reporting, commentary, poetry, or anything else. It takes text and spits out a number, which is a very shallow analysis.

          • dymk 9 hours ago ago

            It's more work to fight bullshit than it is to generate it, though. Saying "Use AI to fight it" is inherently a losing strategy when the other side also has an AI that is just as powerful.

            • jrumbut 9 hours ago ago

              And no amount of BS detecting tells you what is true. The challenge that I see a lot of people have is they really don't have a framework to incorporate new information into.

              They're adrift, every new "fact" (whether true or false) blows them in a new direction. Often they get led in terrible directions from statements that are entirely true (but missing important context).

              A lot of financial cons work that way, a long string of true statements that seem to lead to a particular conclusion. I know that if someone is offering me 20% APY there will usually be some risk or fee that offsets those market-beating gains (it may be a worthwhile risk or a well earned fee, but that number needs to trigger further investigation).

              We need people to be equipped with that sort of framework in as many areas as possible, but we seem to be moving backwards in that area.

        • thesmtsolver2 10 hours ago ago

          Don’t blame the tools. Stalin, Mao and Hitler didn’t need AI.

          • latexr 34 minutes ago ago

            That pro forma response grows oh so very tiresome.

            For the nth time, scale, easiness, and access, matter. AI puts propaganda abilities far beyond the reach of those men in the hands of many more people. Or do you not understand the difference between one man with a revolver and an army with machine guns? They are not the same.

            Nowhere in my comment am I “blaming the tools”. I’ll ask that you engage with the argument honestly instead of simply parroting what you already believe absent reading.

      • camillomiller 11 hours ago ago

        Is that worth the cost of this technology? Both in terms of financial shenanigans and its environmental cost?

        • subroutine 10 hours ago ago

          Are you asking if the 10 seconds it takes AI to generate an image is more costly to the environment than a commissioned graphics artist using a laptop for 5-6 hours, or a painter who uses physical media sourced from all over the world?

          • bayindirh 10 hours ago ago

            In short, yes.

            A modern laptop is running almost fanless, like a 486 from the days of yore.

            A single H200 pumps out 700W continuously in a data center, and you run thousands of them.

            Also, don't forget the training and fine tuning runs required for the models.

            Mass transportation / global logistics can be very efficient and cheap.

            Before the pandemic, it was in some cases cheaper to import fresh tomatoes from half a world away than to grow them locally. A single container of painting supplies is nothing in the grand scheme of things, esp. when compared with what data centers are consuming and emitting.

            • lxgr 2 hours ago ago

              This argument is so flawed that it almost loops back around to being correct again:

              No, in terms of unit economics, I'm almost certain that the painting supplies have a bigger ecological/resource footprint than an LLM per icon generated, and I'm pretty sure the cost of shipping tomatoes does not decrease that footprint, even if it possibly dwarfs it.

              But yes, due to Jevons' paradox, the total resource use might well increase despite all that. I, for example, would have never commissioned a professional icon for my silly little iOS shortcuts on my homescreen, so my silly icon-related carbon footprint went from exactly zero to slightly above that.

            • ToValueFunfetti 9 hours ago ago

              This is a plainly dishonest comparison. A single H200 does not need to run continuously for you to generate a dozen pictures. And then you immediately pivot to comparing the paint usage against "the grand scheme of things"- 700W is nothing in the grand scheme of things.

              • bayindirh 2 hours ago ago

                In fact it's pretty fair.

                Many people think that when a piece of hardware is idle, its power consumption becomes irrelevant, and that's true for home appliances and personal computers.

                However, the picture is pretty different for datacenter hardware.

                Looking now, an idle V100 (I don't have an idle H200 at hand) uses 40 watts, at minimum. That's more than the TDP of many modern consumer laptops and systems. A MacBook Air uses a 35W power supply to charge itself, and it charges pretty quickly even when it's under relatively high stress.

                I want to clarify some more things. A modern GPU server houses 4-8 high end GPUs. This means 3 kW to 5 kW of maximum energy consumption per server. A single rack runs around 75-100 kW, and you house hundreds of these racks. So, we're talking about megawatts of energy consumption. CERN's main power line on the Swiss side had a capacity around 10 MW, to put things in perspective.

                Let's assume an H200 uses 60 W when idle. This means ~500 W of wasted energy per server for sitting around. If a complete rack is idle, it's 10 kW. So you're wasting the energy consumption of 3-5 houses just by sitting and doing nothing.

                This computation only thinks about the GPU. Server hardware also adds around 40% to these numbers. Go figure. This is wasting a lot for cat pictures.

                And, these "small" numbers add up to a lot.
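
                A minimal sanity check on those numbers (a rough sketch; the 60 W idle draw, 8 GPUs per server, the hypothetical 20-server rack density, and the 40% overhead are all assumptions from this thread, not measurements):

```python
# Back-of-envelope idle-power math using the assumed figures above.
idle_w_per_gpu = 60      # assumed idle draw of one H200, in watts
gpus_per_server = 8      # upper end of the 4-8 GPUs per server quoted above
servers_per_rack = 20    # hypothetical rack density

idle_w_per_server = idle_w_per_gpu * gpus_per_server            # ~500 W per server
idle_kw_per_rack = idle_w_per_server * servers_per_rack / 1000  # ~10 kW per rack

# The comment adds roughly 40% on top for non-GPU server hardware
# (CPUs, fans, power-supply losses).
total_kw_per_rack = idle_kw_per_rack * 1.4

print(idle_w_per_server)            # 480
print(idle_kw_per_rack)             # 9.6
print(round(total_kw_per_rack, 2))  # 13.44
```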

                • lxgr 2 hours ago ago

                  Definitely worth considering in a world in which there are any H200s idling in data centers.

                  • bayindirh 2 hours ago ago

                    Now that's one fine No True Scotsman.

                        A: GPUs use a lot of power!
                        B: Not all of them are running 100% continuously, eh?
                        A: They waste too much power when they're idle, too!
                        C: None of the H200s are sitting idle, you knob!
                    
                    I mean, they are either wasting energy sitting idle or doing barely useful work. I don't know what to say anymore.

                    We'll cook ourselves, anyway. Why bother? Enjoy the sauna. ¯\_(ツ)_/¯

                    • lxgr an hour ago ago

                      I'm not saying that this isn't "true idling", I'm saying that idling H200s simply don't exist, i.e., I disagree with B. Do you, A, even disagree?

                      > they are either wasting energy sitting idle or doing barely useful work

                      Now here's a true (inverse) scotsman, or more accurately, a moved goalpost: Work on things you don't deem valuable is basically the same thing as idling?

                      > We'll cook ourselves, anyway. Why bother? Enjoy the sauna. ¯\_(ツ)_/¯

                      I'm very concerned about that too, but I don't think we'll avoid the sauna with fatalism or logically unsound appeals to morality about resource consumption.

            • cpill 9 hours ago ago

              These are unfair comparisons. It's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. It's not a single container of painting supplies, it's all of them (which are toxic, by the way).

              So if power were plentiful and environmentally friendly, you'd be on board with it?

              • bayindirh 2 hours ago ago

                > These are unfair comparisons. It's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. It's not a single container of painting supplies, it's all of them (which are toxic, by the way).

                Please see my other comment about energy consumption and connect the dots with how open loop DLC systems are harmful to fresh water supplies (which is another comment of mine).

                > So if power were plentiful and environmentally friendly, you'd be on board with it?

                This is a pretty loaded way to ask this. Let me put this straight. I'm not against AI. I'm against how this thing is built. Namely:

                    - Use of copyrighted and copylefted materials to train models, hiding under "fair use" to exploit people.
                      - Moreover, belittling the people who create things with their blood, sweat and tears, and poorly imitating their art just for kicks or quick bucks.
                    - Playing fast and loose with the environment and energy consumption, without trying to make things efficient and sustainable to reduce initial costs and time to market.
                    - Gaslighting users and the general community about how these things are built and how it's all theater, again to make people use this and offload their thinking, atrophying their skills and making them dependent on these tools.
                
                I work in HPC. I support AI workloads and projects, but the projects we tackle have real benefits, like ecosystem monitoring, long term climate science, water level warning and prediction systems, etc., which have real tangible benefits for the future of humanity. Moreover, there are other projects trying to minimize the environmental impact of computation which we're part of.

                So it's pretty nuanced, and the AI iceberg goes well below OpenAI/Anthropic/Mistral trio.

                • lxgr 2 hours ago ago

                  > I support AI workloads and projects, but the projects we tackle have real benefits [...]

                  As opposed to the illusory/fake/immoral benefits of using LLMs for entertainment purposes (leaving aside all other applications for now)?

                  How do you feel about Hollywood, or even your local theater production? I bet the environmental unit economics don't look great on those either, yet I wouldn't be so quick to pass moral judgement.

                  Why not just focus on the environmental impact instead of moralizing about the utility? It seems hard to impossible to get consensus there, and the impact should be able to speak for itself if it's concerning.

          • dilDDoS 10 hours ago ago

            Cheaper/faster tech increases overall consumption though. Without the friction of commissioning a graphics artist to design something, a user can generate thousands of images (and iterate on those images multiple times to achieve what they want), resulting in way more images overall.

            I'm not really well versed on the environmental cost, more just (neutrally) pointing out that comparing a single 10s image to a 5-6 hour commission ignores the fact that the majority of these images probably would never have existed in the first place without AI.

            • runarberg 10 hours ago ago

              Also, ignoring training when talking about the environmental costs is bad faith. Without training this image would not exist, and if nobody were generating images like these, the training would not happen. So we should really count the 10 seconds it took for inference plus the weeks or months of high-intensity compute it took to train the model.

              • ToValueFunfetti 8 hours ago ago

                You'd want to compare against the fraction of training attributable to the image

          • camillomiller 3 hours ago ago

            Wow, do you hold a degree in false dichotomies?

        • Legend2440 10 hours ago ago

          The environmental cost is significantly overblown, especially water usage.

          • bayindirh 10 hours ago ago

            I work with direct liquid cooled systems. If the datacenter is working with open DLC systems (most AI datacenters in the US in fact do), a lot of water is being wasted, 24/7/365.

            A mid-tier top-500 system (think #250-#325) consumes about 0.75 MW of power. AI data centers consume orders of magnitude more. To cool that behemoth you need to pump tons of water per minute in the inner loop.

            Outer loop might be slower, but it's a lot of heated water at the end of the day.

            To prevent water wastage, you can go closed loop (for both inner and outer loops), but you can't escape the heat you generate and pump to the atmosphere.

            So, the environmental cost is overblown, as in Chernobyl or fallout from a nuclear bomb is overblown.

            So, it's not.

            • Legend2440 10 hours ago ago

              It's not that it doesn't use water; it's that water is not scarce unless you live in a desert.

              As a country, we use 322 billion gallons of water per day. A few million gallons for a datacenter is nothing.
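
              Taking both figures at face value (the 322 billion gallons per day and the few-million-gallon datacenter estimate are this thread's claims, not verified data), the ratio is easy to compute:

```python
# Scale comparison using the figures quoted above.
us_daily_gallons = 322e9     # claimed total US daily water use
datacenter_gallons = 3e6     # "a few million gallons" for one datacenter

fraction = datacenter_gallons / us_daily_gallons
print(f"{fraction:.2e}")     # ~9.32e-06, i.e. about a thousandth of a percent
```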

              • bayindirh 10 hours ago ago

                The problem is you don't just use that water and give it back.

                The water gets contaminated and heated, making it unsuitable for organisms to live in, or to be processed and used again.

                In short, when you pump that water back into the river, you're both poisoning and cooking the river, destroying the ecosystem.

                Talk about multi-threaded destruction.

                • Legend2440 10 hours ago ago

                  No, you're making that up. Datacenters do not poison rivers.

                  • bayindirh 10 hours ago ago

                    To reiterate, I work in a closed loop DLC datacenter.

                    Pipes rust, you can't stop that. That rust seeps to the water. That's inevitable. Moreover, if moss or other stuff starts to take over your pipes, you may need to inject chemicals to your outer loop to clean them.

                    Inner loops already use biocides and other chemicals to keep them clean.

                    Look how nuclear power plants fight with organism contamination in their outer cooling loops where they circulate lake/river water.

                    Same thing.

                    • camillomiller 3 hours ago ago

                      Dude, you can’t fight Dunning-Kruger. They all think they’re experts in everything now.

              • jll29 9 hours ago ago

                Just because some countries waste a lot at present time does not mean it's available as a resource indefinitely.

        • vrc 11 hours ago ago

          Depends on whether you believe it will ever become cheaper - through hardware, smaller and more efficient models, or energy itself. The techno optimist believes that is the inevitable and investable future. But on what horizon, and will it get “zip drived” before then?

        • 3dsnano 10 hours ago ago

          absolutely without a doubt it is

          • bayindirh 10 hours ago ago

            If that energy is used for research, maybe. If used to answer customer questions or generate Studio Ghibli knock-offs, it's not worth it, even a bit.

            • 3dsnano 9 hours ago ago

              what’s the difference between those two? how can you say one has more value than the other?

              • bayindirh 2 hours ago ago

                One is trying to save the future of the planet and the humanity with science, the other one is mocking a man who devoted his whole life to his art, even if it means spending years to perfect a three-second sequence for kicks and monies.

                If you see no difference between them, I can't continue to discuss this with you, sorry.

            • lxgr an hour ago ago

              To you. Fortunately nobody elected you chief resource allocator of the planet.

              And I say that as somebody that also finds Ghibli knock-off avatars used by AI bros in incredibly bad taste (or, arguably an even worse crime against taste, a dated 2025 vibe).

              • bayindirh an hour ago ago

                Thanks for your personal jab. Another nice comment to frame and hang to my wall.

                I like your discussion style.

                • lxgr 32 minutes ago ago

                  Passing moral judgement about other people's value preferences seems pretty preposterous to me as well, so I was being a bit glib, but to be clear:

                  I don't want to live in a world in which people get to decide what others can and can't do with their share of resources (after properly accounting for all externalities, including pollution, the potential future value of non-renewable present resources etc. – this is where today's reality often and massively misses that ideal) based on their subjective moral criteria.

    • Gigachad 11 hours ago ago

      This is where I’m at. If you can’t be bothered to write/make it, why would I be bothered to read or review it?

      • tempaccount5050 11 hours ago ago

        Because I'm not an artist and can't afford to pay one for whatever business I have? This idea that only experts are allowed to do things is just crazy to me. A band poster doesn't have to be a labor of love artisanal thing. Were you mad when people made band posters with MS word instead of hiring a fucking typesetter? I just don't get it.

        • overgard 11 hours ago ago

          I dunno, I have some band posters that are pretty cool pieces of art that obviously had a lot of thought put into them (pre-AI era stuff). I don't think I'd hang up an AI generated band poster, even if it was cool; I'd feel weird and tacky about it.

          • runarberg 10 hours ago ago

            I was hosting a Karaoke event in my town and really went out of my way to ensure my promotional poster looked nothing like AI. I really, really, really did not want my townsfolk thinking I would use AI to design a poster.

            My design rules were: no gradients; no purple; prefer muted colors; plenty of sharp corners and overlapping shapes; use the Boba Milky font face.

            • dpark 10 hours ago ago
              • runarberg 9 hours ago ago

                I mean: https://imgur.com/a/BYikxEI

                The difference is very stark:

                - The AI has a hard time making the geometric shapes regular. You see the stars have different-size arms at different intervals in the AI version. It would take a human artist longer to make it look this bad.

                - The 5-point stars are still a little rounded in the AI version.

                - There is way too much text in the AI version (a human designer might make that mistake, but it is very typical of AI).

                - The orange 10 point star in the right with the text “you are the star” still has a gradient (AI really can't help itself).

                - The borders around the title text “Karaoke night!” bleed into the borders of the orange (gradient) 10-point star on the right, but only halfway. This is very sloppy; a human designer would fix that.

                - The font face is not Milky Boba but some sort of an AI hybrid of Milky Boba, Boba Milky and comic sans.

                - And finally, the QR code has obvious AI artifacts in them.

                Point I’m making, it is very hard to prompt your way out of making a poster look like AI, especially when the design is intentional in making it not look like AI.

                • dpark 9 hours ago ago

                  I hear what you’re saying and at the same time I don’t agree with some of your criticisms. The gradient, yep, it slipped one in. The imperfect stars? I have seen artists do this forever, presumably intentional flair. The few real “glitches” would be trivial to fix in Photoshop.

                  But they are very different certainly. ChatGPT generated a poster with a very sleek, “produced” style that apes corporate posters whereas you went with a much more personal touch. You are correct that yours does not look like typical AI.

                  My point is certainly not that the AI poster is better, only that it’s capable of producing surprising results. With minimal guidance it can also generate different styles: https://imgur.com/a/zXfOZaf

                  I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better. A year or two ago the poster would have been full of nonsense letters.

                  > And finally, the QR code has obvious AI artifacts in them.

                  I wonder if this is intentional, to prevent AI from regurgitating someone’s real QR codes.

                  ETA: Actually, I wonder how much of the “flair” on human-drawn stars is to avoid looking like they are drag-and-drop from a program like Word. Ironic if we’ve circled back around to stars that look perfect to avoid looking like a different computer generated star.

                  • twobitshifter 8 hours ago ago

                    > I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better.

                    What’s the mechanism that makes an AI ‘better’ at looking non-AI? Training on non-AI trend images? It’s not following prompts more closely. Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.

                    To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art. To be non-AI, art needs to counter all averages and trends that the models are trained on.

                    • dpark 7 hours ago ago

                      > What’s the mechanism that makes an AI ‘better’ at looking non-AI?

                      I don’t know. Better training data? More training data? The difference over the past year or two is stark so something is improving it.

                      > Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.

                      The fact that humans are actively trying to make art that does not look like AI makes it clear that AI is not so obvious as many would like to pretend. If it were obvious, no one would need to try to avoid their art looking like AI.

                      > To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art.

                      Obviously.

                      > To be non-AI, art needs to counter all averages and trends that the models are trained on.

                      So in order to not look like AI, art just has to be so unique that it’s unlike any training data. That’s a high bar. Tough time to be an artist.

                  • runarberg 6 hours ago ago

                    My point is not that the AI version looks bad (although it does); it is that I hate AI, and so do many people around me. I hate it so much, and know so many people who hate it as much, that I am consciously altering my designs to be as far from AI as I can. This is the creative-design equivalent of moving from Seattle to Florida after a divorce.

                    About the stars. I know designers paint imperfect stars. I even did that in my design: I stretched it and rotated it slightly. A more ambitious designer might go further and drag a couple of vertices around to exaggerate them relative to the others. But usually there is some balance in those decisions. AI, however, just puts the vertices wherever, and it is ugly and unbalanced. A regular geometric shape with a couple of oddities is a normal design choice, but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not to do that.

                    • dpark 5 hours ago ago

                      > I am consciously altering my designs such to be as far away from AI as I can

                      I don’t think this is a productive choice, but it’s certainly yours to make.

                      > but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not do to that

                      I find this such an odd thing to say. It’s way easier to draw a wonky star than a symmetrical one. Unless “drawing” here means using a mouse to drag and drop a star that a program draws for you.

                      Vintage illustrations are full of nonsymmetrical shapes. The classic Batman “POW” and similar were hand drawn and rarely close to symmetrical.

                      • runarberg 5 hours ago ago

                        I draw mine in Inkscape (because I like open source more than my sanity), and Inkscape has special tools for drawing regular geometric shapes. You don't need to use those tools: you can use the freehand pencil, or the Bezier curve tool, or even hand-code the <path d="M43,32l5.34-2.43l3.54-0.53" />, etc. But those other tools are suboptimal compared to the regular-shape tool.

                        Apart from me, my partner also does graphic design, and unlike me she values her sanity more than open source, so she uses Illustrator. In Adobe's walled garden of proprietary software it is the same story: you generally use the dedicated tools to get regular shapes (or patterns) and then alter them after they are drawn; you don't draw them from scratch. If you are familiar with modular analog synthesizers, this is like starting with a square wave and then filtering it into a more natural-sounding form.
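                        That start-regular-then-alter workflow is easy to sketch in code. A minimal Python illustration (the function and its jitter parameter are my own invention, not any Inkscape or Illustrator API):

```python
import math
import random

def star_path(cx, cy, r_outer, r_inner, points=5, jitter=0.0, seed=None):
    """SVG path for a star: perfectly regular when jitter=0;
    jitter > 0 nudges each vertex radially, like dragging it by hand."""
    rng = random.Random(seed)
    coords = []
    for i in range(points * 2):
        r = r_outer if i % 2 == 0 else r_inner  # alternate tip / notch
        r *= 1 + rng.uniform(-jitter, jitter)   # the deliberate oddity
        angle = math.pi * i / points - math.pi / 2
        coords.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    d = "M" + " L".join(f"{x:.2f},{y:.2f}" for x, y in coords) + " Z"
    return f'<path d="{d}" />'

print(star_path(50, 50, 40, 16))                       # regular five-pointed star
print(star_path(50, 50, 40, 16, jitter=0.15, seed=1))  # a few oddities
```

                        The thread's point holds here too: one or two nudged vertices read as style, while jittering everything reads as noise.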

        • AkBKukU 11 hours ago ago

          > can't afford to pay one for whatever business I have

          At small scales what "art" does your business need? If you can't afford to hire an artist (which is completely fine, I couldn't for my business!) do you really need the art or are you trying to make your "brand" look more polished than it actually is? Leverage your small scale while you can because there isn't as much of an expectation for polish.

          And no, a band poster doesn't have to be a labor of love. But it also doesn't have to be some big showy art either. If I saw a small band with a clearly AI generated poster it would make me question the sources for their music as well.

        • squidsoup 10 hours ago ago

          > band poster doesn't have to be a labor of love artisanal thing

          Very few bands would agree with that statement.

        • Arch485 10 hours ago ago

          I think you're misunderstanding - most people's beef with AI art isn't that it "isn't made by experts", it's that

          1) It's made from copyrighted works, and the original authors receive no credit.

          2) It is (typically) low-effort.

          3) There are numerous negative environmental effects of the AI industry in general.

          4) There are numerous negative social effects of AI in general, and more specifically, AI-generated imagery is used a lot for spreading misinformation.

          5) There are numerous negative economic effects of AI. Specifically with art, it means real human artists are being replaced by AI slop, which is of significantly lower quality than the equivalent human output. Also, instead of supporting multiple different artists, you're siphoning your money to a few billion-dollar companies (this is terrible for the economy).

          As a side note, if you have a business which truly cannot afford to pay any artists, there are a lot of cheaper, (sometimes free!) pre-paid art bundles that are much less morally dubious than AI. Plus, then you're not siphoning all of your cash to tech oligarchs.

        • Peritract 7 hours ago ago

          No one is saying that only experts can do things; that's a totally inaccurate reading of the argument and the post.

          People are saying, very clearly, that they're not willing to put effort into something produced by someone who put no effort in.

        • jll29 9 hours ago ago

          What, a music band's poster, 'typeset' in Microsoft Word? I cannot imagine bothering to go to such a band's concert.

          <joke>What's your rock band called, "SEC Form 10-K"?</joke>

        • swader999 11 hours ago ago

          I agree, and who's to say your life experience isn't as valid as that of someone with fewer years but more time at the traditional tools? I'd think either extreme could produce real art if the tools moat were reduced with AI.

        • Gigachad 10 hours ago ago

          I actually love MS word posters. It's a million times more authentic and enjoyable than a slop generation. If a band put up an AI poster I'd assume they lack any kind of taste which is the whole reason I'd want to listen to a band anyway.

          I know this is controversial in tech spaces. But most people, particularly those in art spaces like music actually appreciate creativity, taste, effort, and personal connection. Not just ruthless efficiency creating a poster for the lowest cost and fastest time possible.

        • reaperducer 10 hours ago ago

          Because I'm not an artist and can't afford to pay one for whatever business I have?

          If your business can't afford to spend $5 on Fiverr, it's not a business. It's not even panhandling.

          • tempaccount5050 7 hours ago ago

            Why is that better? They're going to use AI anyway. It's Fiverr.

        • Jtarii 9 hours ago ago

          I would rather see a MS word poster than be lied to.

        • satisfice 10 hours ago ago

          How about going without? I can’t afford an artist, either, so I don’t have art. Don’t foist slop on people because you are trying to be something that you aren’t.

      • zulban 11 hours ago ago

        Nobody can be bothered to make my cat out of Lego at the size of Mount Everest, but if an AI did, I'd sure love to see it.

        Your quip is pithy but meaningless.

        • Gigachad 11 hours ago ago

          I'm not saying it's worthless for yourself, it's worthless to me as a viewer. AI content is great for your own usage, but there is no point posting and distributing AI generation.

          I could have generated my own content, so just send the prompt rather than the output to save everyone time.

          • zulban 6 hours ago ago

            Maybe reread my comment. Would you not want to see a Mount Everest-sized Lego cat? Even if it were my cat?

            Again - your quip sounds good but when you think about it, it's flatly wrong.

            • Fraterkes 35 minutes ago ago

              This doesn't make sense: if I want to see a Lego-cat slop image I can just prompt a model myself (and have it be of my own cat). There's no reason for you to be involved in any part of that process, because the point of this stuff is that you are not doing anything.

          • dolebirchwood 10 hours ago ago

            And when the distilled knowledge/product is the result of multiple prompts, revisions, and reiterations? Shall we send all 30+ of those as well so as to reproduce each step along the way?

      • loudandskittish 10 hours ago ago

        Exactly how I feel. There is already more art, movies, music, books, video games and more made by human beings than I can experience in my lifetime. Why should I waste any time on content generated by the word guessing machine?

    • atleastoptimal 11 hours ago ago

      The issue is that the signalling makes sense when human generated work is better than AI generated. Soon AI generated work will be better across the board with the rare exception of stuff the top X% of humans put a lot of bespoke highly personalized effort into. Preferring human work will be luxury status-signalling just like it is for clothing, food, etc.

      • dilDDoS 10 hours ago ago

        I'm probably in a weird subgroup that isn't representative of the general public, but I've found myself preferring "rough" art/logos/images/etc, basically because it signals a human put time into it. Or maybe not preferring, but at least noticing it more than the generally highly refined/polished AI artwork that I've been seeing.

        • appplication 10 hours ago ago

          There’s no reason to think people broadly want “better” writing, images, whatever. Look at the indie game scene, it’s been booming for years despite simpler graphics, lower fidelity assets, etc. Same for retro music, slam poetry, local coffee shops, ugly farmers market produce, etc.

          There is a mass, bland appeal to “better” things but it’s not ubiquitously desired and there will always be people looking outside of that purely because “better” is entirely subjective and means nothing at all.

      • james2doyle 10 hours ago ago

        I think "better" is doing a lot of heavy lifting in this argument. Better how?

        Is an AI generated photo of your app/site going to be more accurate than a screenshot? Or is an AI generated image of your product going to convey the quality of it more than a photo would?

        I think Sora also showed that the novelty of generating just "content" is pretty fleeting.

        I would be interested to see if any of the next round of ChatGPT advertisements use AI generated images. Because if not, they don’t even believe in their own product.

      • masswerk 10 hours ago ago

        The issue being, it's not an expression of anything. Merely like a random sensation, maybe some readable intent, but generic in execution, which isn't about anything even corporate art should be about. Are we going to give up on art, altogether?

        Edit: One of the possible outcomes may be living in a world like in "They Live" with the glasses on. Since no expression has any meaning anymore, the message is just there as a signal of some kind. (A generic "BUY" plus the associated brand name in small print, etc.)

        • ragequittah 9 hours ago ago

          Can't the expression come from the person prompting the AI, sometimes taking hours inpainting or tweaking the prompt to try to get the exact image/expression they had in mind? A good use I've found is turning scenes from a dream into an image. If that's not an expression of something, then I'm not sure anything is.

          • masswerk 9 hours ago ago

            Notably, this process of struggle is meant to go away, to make room for instant satisfaction. This is really about some kind of expression consumerism. (And what will be lost along the way is meaning.)

            • ragequittah 8 hours ago ago

              I always find this argument to ring hollow. Maybe it's because I've been through it with too many technologies already. Digital photography took out the art of film photography. CGI took out the wonder of practical effects. Digital art takes out the important brush strokes of someone actually painting. The real answer always is the mediums can coexist and each will be good for expression in their own way.

              I'm not sure you immediately lose meaning if someone can easily make a highly personalized version of something. The percentage of completely meaningless video has skyrocketed since YouTube and TikTok came about. The amount of good stuff to watch has gone up as well, though.

      • fwipsy 10 hours ago ago

        Only novel art is interesting. AI can't really do novel. It's a prediction algorithm; it imitates. You can add noise, but that mostly just makes it worse. It can be used to facilitate original stuff though.

        But so many people want to make art, and it's so cheap to distribute it, that art is already commoditized. If people prefer human-created art, satisfying that preference is practically free.

        • atleastoptimal 10 hours ago ago

          AI can be novel; there is nothing in the transformer architecture that prohibits novelty. It's just that, structurally, it much prefers pattern-matching.

          But I think the idea of novelty is a red herring. Any random number generator can arbitrarily create a "novel" output that a human has never seen before. The issue is whether something is both novel and useful, which is hard even for humans to do consistently.

          • CooCooCaCha 10 hours ago ago

            Anthropic recently changed their take-home test specifically to be more “out-of-distribution” and therefore more resistant to AI so they can assess humans.

            I’m so tired of “there’s nothing preventing” and “humans do that too”. Modern AI is just not there. It is not like humans, and it has difficulty adapting to novelty.

            Whether transformers can overcome that remains to be seen, but it is not a guarantee. We’ve been dealing with these same issues for decades and AI still struggles with them.

        • idiotsecant 10 hours ago ago

          There are lots of things that are novel to you without necessarily being novel to the universe.

      • paulddraper 11 hours ago ago

        "Artisanal art" as it were.

      • vinyl7 10 hours ago ago

        The goal of art isn't to be perfect or as realistic as possible. The goal of art is to express, and enjoy that unique expression.

      • davebren 9 hours ago ago

        > Preferring human work will be luxury status-signalling just like it is for clothing, food, etc.

        What? Those items are luxuries when made by humans because they are physical goods where every single item comes with a production and distribution cost.

    • strulovich 11 hours ago ago

      Here’s one example:

      I just recently used for image generation to design my balcony.

      It was a great way to see design ideas imagined in place and decide what to do.

      There are many cases people would hire an artist to illustrate an idea or early prototype. AI generated images make that something you can do by yourself or 10x faster than a few years ago.

      • dwd 10 hours ago ago

        Did the same for my front garden.

        Not withstanding a few code violations, it generated some good ideas we were then able to tweak. The main thing was we had no idea of what we wanted to do, but seeing a lot of possibilities overlaid over the existing non-garden got us going. We were then able to extend the theme to other parts of the yard.

    • tecoholic 10 hours ago ago

      100%. A picture is worth a thousand words only when it conveys something. I love to see the pictures from my family even when they are taken with no care to quality or composition but I would look at someone else’s (as in gallery/exhibitions) only when they are stunning and captured beautifully. The medium is only a channel to communicate.

      Also, this can’t be real. How many publications did they train this stuff on, and why is there no acknowledgment, even just to say: we partnered with xyz manga house to make our model smarter at manga? Like, what’s wrong with this company?

    • _the_inflator 11 hours ago ago

      We need to flip the script. AI is doing its own marketing: adding “illegal usage will lead to X” is a gateway that sparks curiosity. There is a saying that censoring games for young adults ensures they will buy them like crazy and circumvent the restrictions, because danger is cool.

      There is nothing that cannot harm: knives, cars, alcohol, drugs. A society needs to balance risks and benefits. Words can be used to do harm, and so can email, anything; it depends on intention and type.

    • _the_inflator 11 hours ago ago

      I see your point, but reconsider: we will have to wait and see. Time will tell, and this is simply economics: useful or not?

      I became totally indifferent after examining my spending habits for unnecessary stuff, having watched world championships for niche sports: for some this is a calling, for others a waste. It is a numbers game, then.

    • Havoc 9 hours ago ago

      >and even if we cannot tell if an image is AI-generated, we can know if companies are using AI to generate images in general, so the appealing is decreasing

      Is that true? I don't think I'd get tired of images that are as good as human-made ones just because I know or suspect AI was involved.

    • youdots 10 hours ago ago

      The technically (in both senses) astonishing and amazing output is not far off from the qualities of real advertising: staged, attention-grabbing, artificially created, superficially demanded, commercially attractive. These align, and many similarities between the functions and outcomes of the two spheres come to mind.

    • simonw 11 hours ago ago

      I think there's real value to be had in using this for diagrams.

      Visual explanations are useful, but most people don't have the talent and/or the time to produce them.

      This new model (and Nano Banana Pro before it) has tipped across the quality boundary where it actually can produce a visual explanation that moves beyond space-filling slop and helps people understand a concept.

      I've never used an AI-generated image in a presentation or document before, but I'm teetering on the edge of considering it now provided it genuinely elevates the material and helps explain a concept that otherwise wouldn't be clear.

      • mwcampbell 10 hours ago ago

        Are there any models that are specifically trained to produce diagrams as SVG? I'd much prefer that to diffusion-based raster image generation models for a few reasons:

        - The usual advantages of vector graphics: resolution-independence, zoom without jagged edges, etc.

        - As a consequence of the above, vector graphics (particularly SVG) can more easily be converted to useful tactile graphics for blind people.

        - Vector graphics can more practically be edited.

        • twobitshifter 8 hours ago ago

          You can get them to produce Mermaid diagrams, but you can also generate those yourself from text.
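          For instance (a hand-written illustration, not actual model output), a Mermaid diagram is just editable text:

```mermaid
flowchart TD
    A[User submits form] --> B{Input valid?}
    B -- yes --> C[Save record]
    B -- no --> D[Show error]
    D --> A
```

          Because the source is text, you can diff it, version it, and tweak labels without regenerating anything.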

      • resters 11 hours ago ago

        This is the key point. In my view it's just like anything else, if AI can help humans create better work, it's a good thing.

        I think what we'll find is that visual design is no longer as much of a moat for expressing concepts, branding, etc. In a way, AI-generated design opens the door for more competition on merits, not just those who can afford the top tier design firm.

      • lol_me 11 hours ago ago

        yeah I'm not sure I'm in agreement that we can hand-wave assets and ads as entire classes of valuable content

    • swader999 11 hours ago ago

      I tend to share your view. But is there really a line like you describe? Maybe AI just needs to get a few iterations better and we'll all love what it generates. And how is it really any different from the Photoshop-assisted output of the past?

    • JumpCrisscross 10 hours ago ago

      > What else?

      I used to have an assistant make little index-card-sized agendas for get-togethers when folks were in town or I was organising a holiday or offsite. They used to be physical; now it's a cute thing I can text around so everyone knows when they should be up by (and, if they've slept in, by when they can go back to bed). AI has been good at making these. They don't need to be works of art, just cute and silly and maybe embedded with an inside joke.

      • pesus 9 hours ago ago

        I'm not seeing how it takes more than 5 minutes to type up an itinerary. If you want to make it cute and silly, just change up the font and color and add some clip art.

        If this is the best use case that exists for AI image generation, I'm only further convinced the tech is at best largely useless.

        • JumpCrisscross 8 hours ago ago

          > not seeing how it takes more than 5 minutes to type up an itinerary

          Because I’ll then spend hours playing with the typography (because it’s fun) and making it look like whatever design style I’ve most recently read about (again, because it’s fun) and then fighting Word or Latex because I don’t actually know what I’m doing (less fun). Outsourcing it is the right move, particularly if someone else is handling requests for schedules to be adjusted. An AI handles that outsourcing quicker for low-value (but frequent) tasks.

          > If this is the best use case that exists for AI image generation

          I’ve also had good luck sketching a map or diagram and then having the AI turn it into something that looks clean.

          Look, 99% of my use cases are e.g. making my cat gnaw on the Tetons or making a concert of lobsters watching Lady Gaga singing “I do it for the claws” or whatever so I can send two friends something stupid at 1AM. But there does appear to be a veneer of productivity there, and worst case it makes the world look a bit nicer.

          • breezybottom 6 hours ago ago

            You might not be able to tell how bad the AI slop looks, but I guarantee some of your friends can. AI is awful at maps and diagrams.

      • jll29 9 hours ago ago

        You are kidding, right?

        It's good that my friends don't make a coffee date feel like a board meeting (with an agenda shared by post 14 working days ahead of the meeting, form for proxy voting attached).

      • reaperducer 10 hours ago ago

        I don't care how many times you write "cute," having my vacation time programmed with that level of granularity and imposed obligation sounds like the definition of "dystopian."

        If I got one of your cute schedule cards while visiting you, I'd tear it up, check into a cheap motel, and spend the rest of my vacation actually enjoying myself.

        Edit: I'm not an outlier here. There have even been sitcom episodes about overbearing hosts over-programming their guests' visits, going back at least to the Brady Bunch.

        • JumpCrisscross 10 hours ago ago

          > If I got one of your cute schedule cards while visiting you, I'd tear it up, check into a cheap motel, and spend the rest of my vacation actually enjoying myself

          Okay. I'd be confused why you didn't speak up while we were planning everything as a group, but those people absolutely exist. (Unless it's someone's birthday, meaning a best friend's or my partner's; then I'm a dictator and nobody gets a choice over or preview of anything.)

          I like to have a group activity planned on most days. If we're going to drive out to get an afternoon hike in before a dinner reservation (and if I have 6+ people in town, I need a dinner reservation, because no, I'm not cooking every single evening), or if I've paid for a snowmobile tour or a friend is bringing out their telescope for stargazing, there are hard no-later-than departure times, either to not miss the activity or to be respectful of others' time.

          My family used to resolve that by constantly reminding everyone the day before and the morning of, followed by constantly shouting at each other in the hours and minutes preceding and, inevitably, through that deadline. I prefer the way I've found. If someone wants to fuck off from an activity, myself included, that's also perfectly fine.

          (I also grew up in a family that overplanned vacations, and I've since recovered from the rebound instinct, which involves not planning anything and leaving everything to serendipity. That works gorgeously, sometimes. But plenty of other times I've wondered why I didn't bother googling the cool festival one town over beforehand, or regretted sleeping in through a parade.)

          > There have even been sitcom episodes about overbearing hosts over-programming their guests' visits

          Sure. And different groups have different strokes. When it comes to my friends and I, generally speaking, a scheduled activity every other day with dinners planned in advance (they all get hangry, every single fucking one of them) works best.

    • gustavus 11 hours ago ago

      I'm working on an edutech game. Without this I would've had much less of a product, because I don't have the budget to hire an artist, and it would've been much less interactive. Because of it I'm able to build a much more engaging experience, so that's one thing, for what it's worth.

    • NikolaNovak 10 hours ago ago

      While I agree with you, hacker news audience is not in the middle of the bell curve.

      I get this sounds elitist, but a tremendous percentage of the population is happily and eagerly engaging with fake religious images, funny AI videos, horrible AI memes, etc. Trying to mention that a video of a puppy is completely AI-generated results in a vicious defense and mansplaining of why the video is totally real (I love it when the video has, e.g., Sora watermarks... this does not stop the defenders).

      I agree with you that human connection and artist intent is what I'm looking for in art, music, video games, etc... But gawd, lowest common denominator is and always has been SO much lower than we want to admit to ourselves.

      Very few people want thoughtful analysis that contradicts their world view, very few people care about privacy or rights or future or using the right tool, very few people are interested in moral frameworks or ethical philosophy, and very few people care about real and verifiable human connection in their "content" :-/

      • Peritract 7 hours ago ago

        HN is absolutely not more critical of AI output than the norm.

        It's been true for various technologies that HN (and tech audiences in general) have a more nuanced view, but AI flips the script on that entirely. It's the tech world who are amazed by this, producing and being delighted by endless blogposts and 7-second concept trailers.

      • ryandrake 9 hours ago ago

        I recently shoulder-surfed a family member scrolling away on their social media feed, and every single image was obvious AI slop. But it didn't matter. She loved every single one, watched videos all the way through, liked and commented on them... just total zombie-consumption mode and it was all 100% AI generated. I've tried in the past pointing out that it's all AI generated and nothing is real, and they simply don't care. People are just pac-man gobbling up "content". It's pretty sad/scary.

    • slibhb 10 hours ago ago

      > Like, in terms of art, it's discarded (art is about humans)

      If a work of art is good, then it's good. It doesn't matter if it came from a human, a neanderthal, AI, or monkeys randomly typing.

      • Jtarii 9 hours ago ago

        The connection with the artist, directly, or across space and time, is a critical part of any artwork. It is one human attempting to communicate some emotional experience to another human.

        When I watch a Lynch film I feel some connection to the man David Lynch. When I see an AI artwork, there is nothing to connect with; no emotional experience is being communicated. It is just empty. Its highest aspiration is elevator music: something vaguely stimulating in the background.

      • papa_bear 10 hours ago ago

        Provenance is part of the work. If a roomful of monkeys banged out something that looked like anything, I'd absolutely hang it on my wall. I would not say the same for 99% of AI generated art.

      • avaer 10 hours ago ago

        Whether art is considered good is in practice highly contextual. One of those contexts is who (what) made it.

    • papichulo2023 11 hours ago ago

      Seems good enough to generate 2D sprites. If that means a wave of pixel-art games I count it as a net win.

      I don't think gamers hate AI; it's just a vocal minority, imo. What most people dislike is sloppy work, as they should, but that can happen with or without AI. The industry has been using AI for textures, voices, and more for over a decade.

      • vunderba 10 hours ago ago

        > Seems good enough to generate 2D sprites.

        It’s really not. That's actually a pet peeve of mine as someone who used to spend a lot of time messing with pixel art in Aseprite.

        Nobody takes the time to understand that the style of pixel art is not the same thing as actual pixel art. So you end up with these high-definition, high-resolution images that people try to pass off as pixel art, but if you zoom in even a tiny bit, you see all this terrible fringing and fraying.

        That happens because the palette is way outside the bounds of what pixel art should use; proper pixel art is generally limited to maybe 8 to 32 colors.

        There are plenty of ways to post-process generative images to make them look more like real pixel art (square grid alignment, palette reduction, etc.), but it does require a bit more manual finesse [1], and unfortunately most people just can’t be bothered.

        [1] - https://github.com/jenissimo/unfake.js
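        Both fixes are small in code. A rough pure-Python sketch of the idea (toy data and a naive nearest-color metric, purely illustrative; tools like unfake.js above do far more):

```python
def quantize(pixel, palette):
    """Snap one RGB pixel to the nearest palette color (squared distance)."""
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

def pixelate(img, cell, palette):
    """Downsample to a coarse grid (one sample per cell), then force
    every sample onto a limited palette -- the two fixes named above."""
    return [
        [quantize(img[y][x], palette) for x in range(0, len(img[0]), cell)]
        for y in range(0, len(img), cell)
    ]

# A tiny 4x4 "generated" image with off-palette, anti-aliased colors.
img = [
    [(250, 10, 10), (240, 30, 20), (10, 10, 250), (30, 20, 240)],
    [(245, 5, 15), (235, 25, 25), (15, 5, 245), (25, 25, 235)],
    [(10, 250, 10), (20, 240, 30), (250, 250, 250), (240, 240, 240)],
    [(5, 245, 15), (25, 235, 25), (245, 245, 245), (235, 235, 235)],
]
palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255), (0, 0, 0)]
print(pixelate(img, 2, palette))  # 2x2 grid, every pixel on-palette
```

        Zooming in on the result shows none of the fringing the comment describes, because every output pixel is forced onto the limited palette.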

      • loudandskittish 10 hours ago ago

        There are already more games being released on Steam than anyone can keep up with, I'm not sure how adding another "wave" on top of it helps.

      • tiagod 11 hours ago ago

        AI for textures for over a decade? What AI?

        • papichulo2023 10 hours ago ago

          Efros–Leung, PatchMatch? Nearest-neighbour methods were "AI" before diffusion models.

      • Thonn 11 hours ago ago

        Are you kidding? I think I see more vitriol for AI in gaming communities than anywhere else, to the point where Steam now requires you to disclose its usage.

        • papichulo2023 10 hours ago ago

          Crimson Desert failed to disclose on release and (almost) nobody cared, gamers kept buying it.

    • NetOpWibby 11 hours ago ago

      The Human Renaissance is something I've been thinking of too and I hope it comes to pass. Of course, I feel like societally, things are gonna get worse for a lot of folks. You already see it in entire towns losing water or their water becoming polluted.

      You'd think these kickbacks leaders of these towns are getting for allowing data centers to be built would go towards improving infrastructure but hah, that's unrealistic.

      Why TF is that unrealistic? SMH

      • Lerc 10 hours ago ago

        >You already see it in entire towns losing water or their water becoming polluted

        Do you have any references for such cases? I have seen talk of such things being at risk, but I am unaware of any specific instances of it occurring.

        • NetOpWibby 5 hours ago ago

          I know I've seen such a story on HN before, you can probably find it by searching for "water" and "data center/AI."

    • underlipton 11 hours ago ago

      >Like, in terms of art, it's discarded (art is about humans)

      I dunno how long this is going to hold up. In 50 years, when OpenAI has long become a memory, post-bubble burst, and a half-century of bitrot has claimed much of what was generated in this era, how valuable do you think an AI image file from 2023 - with provenance - might be, as an emblem and artifact of our current cultural moment, of those first few years when a human could tell a computer, "Hey, make this," and it did? And many of the early tools are gone; you can't use them anymore.

      Consider: there will never be another DallE-2 image generation. Ever.

    • RIMR 11 hours ago ago

      My only actual use of image or video AI tools is self-entertainment. I like to give it prompts and see the results it gives me.

      That's it. I can't think of a single actual use case outside of this that isn't deliberately manipulative and harmful.

    • colechristensen 11 hours ago ago

      >In general, I think people are starting to realize that things generated without effort are not worth spending time with

      Agreed mostly, BUT

      I'm building tools for myself. The end goal isn't the intermediate tool, they're enabling other things. I have a suspicion that I could sell the tools, I don't particularly want to. There's a gap between "does everything I want it to" and "polished enough to justify sale", and that gap doesn't excite me.

      They're definitely not generated without effort... but they are generated with 1% of the human effort they would require.

      I feel very much empowered by AI to do the things I've always wanted to do. (when I mention this there's always someone who comes out effectively calling me delusional for being satisfied with something built with LLMs)

    • iLoveOncall 11 hours ago ago

      Porn and memes. Obviously. This is all that Stable Diffusion has been used for since it was released.

    • ArchieScrivener 11 hours ago ago

      I completely disagree; this replaces art as a job. Why does human art need monetary feedback to be shared? If people require a paycheck to make art, then it was never anything different from what AI-generated images are.

      As for advertising being depressing: it's a little late to get up on the anti-ads high horse after two decades of ad-based technology dominating everything. Go outside and see all those bright, shiny, glittery lights. Those aren't images society created to embolden the spirit and dazzle the senses; those are ads.

      North Korea looks weird and depressing because they don't have ads. Welcome to the West.

    • tomrod 11 hours ago ago

      AI loopidity rearing its head. Just send the bullet points that we all want anyway, right?! Stop sending globs of text and other generated content!

  • agnishom 9 hours ago ago

    I don't know how this benefits humanity. In what way was ChatGPT Images 1.0 not already good enough? Perhaps some new knowledge was created in the process?

  • Melatonic 13 hours ago ago

    Can it generate anything high resolution at increased cost and time? Or is it always restricted?

  • jwpapi 10 hours ago ago

    Why is it all so asian?

    • twobitshifter 8 hours ago ago

      Having 60% of the world’s population might do that.

  • XCSme 10 hours ago ago

    Oh wow, scrolling through the page on mobile makes me dizzy

  • RyanJohn 7 hours ago ago

    Oh my god, it's very nice!

  • dahuangf 2 hours ago ago

    good job

  • apparent 9 hours ago ago

    I find the video to be very annoying. Am I supposed to freeze frame 4x per second to be able to see whether the images are actually good? I've never before felt stressed watching a launch video.

    • Havoc 9 hours ago ago

      Yeah same. At first I thought they're using it to conceal quality, but pausing it they do actually look really good, so strange choice.

      Maybe it's meant to convey pace & hype

      • apparent 8 hours ago ago

        Maybe so, but to me it conveys a headache.

  • bitnovus 14 hours ago ago

    great obfuscation idea - hidden message on a grain of rice

  • ibudiallo 11 hours ago ago

    And here I was proud of myself, having taught my mom and her friends how to discern real from fakes they get on WhatsApp groups. Another even more powerful tool for scammers. I'm taking a break.

    • bananaflag 4 hours ago ago

      I told my mom not to believe anything unless she trusts the source. The way people always did with text.

    • XorNot 11 hours ago ago

      IMO you're fighting the wrong battle: there'll always be a new model.

      But the broader concept of fake news and the manufactured nature of media and rhetoric is much more relevant - e.g. whether or not something's AI is almost immaterial to the fact that any filmed segment does not have to be real or attributed to the correct context.

      It's an old internet classic just to grab an image and put a different caption on it, relying on the fact that no one can discern context or has time to fact-check.

  • gfody 11 hours ago ago

    there's something funny going on with the live stream audio

  • szmarczak 14 hours ago ago

    Wow, the difference between AI and non-AI images collapses. I hate the future where I won't be able to tell the difference.

    • Flere-Imsaho 14 hours ago ago

      I wake up everyday, read the tech news, and usually see some step change in AI or whatever. It's wild to think I'm living through such a massive transformation in my lifetime. The future of tech is going to be so different from when I was born (1980), I guess this is how people born in 1900 felt when they got to see man land on the moon?

      > Wow, the difference between AI and non-AI images collapses. I hate the future where I won't be able to tell the difference.

      Image generation is now pretty much "solved". Video will be next. Perhaps things will turn out the same as chess: even after computers surpassed humans (IBM's Deep Blue beating Kasparov), we still value humans playing chess. We value "hand made" items (clothes, furniture) over the factory-made stuff. We appreciate & value human effort more than machines. Do you prefer a hand-written birthday card or an email?

      • toraway 13 hours ago ago

        "Solved" seems a tad overstated if you scroll up to Simonw's Where's Waldo test with deformed faces plus a confabulated target when prompted for an edit to highlight the hidden character with an arrow.

        • Flere-Imsaho 13 hours ago ago

          It's "solved" in that we have a way forward to reduce the errors down to 0.00001% (a number I just made up). Throwing more compute/time/money at these problems seems to reduce that error number.

      • abraxas 13 hours ago ago

        As someone born in 1975 I always felt until the last couple of years that I had been stuck in a long period of stagnation compared to an earlier generation. My grandmother who was born in the 1910s got to witness adoption of electricity, mass transit, radio, television, telephony, jet flights and even space exploration before I was born.

        Feels like now is a bit of a catchup after pretty tepid period that was most of my life.

        • cubefox 11 hours ago ago

          You will likely witness strongly superhuman AI, which dwarfs any changes your grandmother saw.

      • dag100 13 hours ago ago

        Chess exists solely for the sake of the humans playing it. Even if machines solved chess, people would rather play chess against a person than a machine because it is a social activity in a way. It's like playing tennis versus a person compared to tennis against a wall.

        Photographs, videos, and digital media in general, in contrast, are used for much, much more than just socializing.

    • gekoxyz 14 hours ago ago

      Well, for some of these images for the first time I can't tell that they are AI generated

  • mcfry 7 hours ago ago

    How hard is it to have a video player with a fucking volume toggle?

  • esafak 14 hours ago ago
    • rqa129 14 hours ago ago

      Thanks, all displayed images look horrible and artificial. This will fail like Sora.

      • gekoxyz 14 hours ago ago

        Hard disagree on this, I was coming here to comment that this is the first time I really can't tell that some of the photos are AI generated.

      • furyofantares 14 hours ago ago

        I felt the same, particularly with the diagrams / magazines anyway.

        I don't think it'll fail like Sora though. gpt-image-1.5 didn't fail.

      • livinglist 10 hours ago ago

        Denial is real…

      • QuantumGood 12 hours ago ago

        Your single other comment is simplistic hyperbole as well, so this is presumably a bot account.

  • bitnovus 14 hours ago ago

    No gpt-5.5

  • dzonga 11 hours ago ago

    for video game assets this is massive.

    but in general though - will people believe in anything photographic ?

    imagine dating apps, photographic evidence.

    I'm guessing we're gonna reach a point where - you fuck up things purposely to leave a human mark.

    • telman17 5 hours ago ago

      > for video game assets this is massive.

      Storefronts like Steam require disclosing use of AI assets for art. In most indie dev spaces, devs are scolded for using AI art in their games. I wonder if this perspective will change in a few years.

    • squidsoup 11 hours ago ago

      > but in general though - will people believe in anything photographic ?

      Hopefully film makes a come back.

  • andai 10 hours ago ago

    lol at the fake handwritten homework assignment. Know your customer!

  • OutOfHere 9 hours ago ago

    ChatGPT image generation is and has been horrific for the simple reason that it rejects too many requests. This hasn't changed with the new model. There are too many legal non-adult requests that are rejected, not only for edits, but also for original image generation. I'd rather pay to use something that actually works.

  • davikr 10 hours ago ago

    It definitely lost the characteristic slop look.

  • irishcoffee 10 hours ago ago

    This is so stupid. As a free OSS tool it's amazing. Paying money for this is fucking stupid. How blind are we all, to bow before this tech?

  • rqa129 14 hours ago ago

    Can it generate Chibi figures to mask the oligarchy's true intentions on Twitter and make them more relatable?

  • volkk 14 hours ago ago

    the guys presenting are probably all like 25x smarter than I am, but good god, literally zero on-screen presence or personality.

    • sho_hn 14 hours ago ago

      That's a trained skill, and they presumably have focused on other skills.

      • brcmthrowaway 14 hours ago ago

        Yeah, skills that make them a cool $10M a year

      • volkk 14 hours ago ago

        eh, I don't think personalities are trained. On-screen presence, for sure, but you'd see right through it IRL.

        • dymk 9 hours ago ago

          The corporate espionage industry would disagree

    • OsrsNeedsf2P 11 hours ago ago

      I liked it that way, felt more authentic to see the noobs

    • E-Reverance 14 hours ago ago

      I think its endearing

    • Aethelwulf 11 hours ago ago

      didn't think that sam guy was that bad

  • minimaxir 14 hours ago ago

    HN submission for a direct link to the product announcement which for some reason is being penalized by the HN algorithm: https://news.ycombinator.com/item?id=47853000

    • dang 9 hours ago ago

      (We eventually merged the threads hither)

  • simonw 14 hours ago ago

    Suggest renaming this to "OpenAI Livestream: ChatGPT Images 2.0"

    • dang 11 hours ago ago

      (We've since merged the threads and moved the livestream link to the toptext)

    • I_am_tiberius 14 hours ago ago

      or "How we make money with your images 2.0".

  • sho_hn 14 hours ago ago

    In 5 years and 3 months between DALL-E and Images 2.0 we've managed to progress from exuberant excitement to jaded indifference.

    • nba456_ 10 hours ago ago

      Who's 'we'? Speak for yourself!

    • kibibu 12 hours ago ago

      Because we are all seeing the harm these tools are being used for.

      It's just another step into hell.

  • welder 9 hours ago ago

    Introducing DeepFakes 2.0 /s

  • zb3 14 hours ago ago

    Image generation? Hmm, would be cool if OpenAI also made a video-generation model someday..

    • incognito124 14 hours ago ago

      If only there was a social network with solely AI generated videos, I would pay literal money for it...

      • allenbina 7 hours ago ago

        If I may address this with both skepticism and curiosity: why? I think I speak for everyone when I say I would pay to go back to Facebook circa 2018. No algorithm, no AI.

        • Bigpet 4 hours ago ago

          Are you being sincere? This is one layer of irony too much for my brain to comprehend.

          The person you're replying to is making a joke about OpenAI shutting down Sora their video generation "social media" app recently.

  • biosubterranean 11 hours ago ago

    Oh no.

  • ai4thepeople 11 hours ago ago

    Each day when my AI girlfriend wakes me up and shows me the latest news, I feel: This is it! We are living in a revolution!

    Never before in history did humanity have the possibility of seeing a picture of a pack of wolves! The dearth of photographs has finally been addressed!

    I told my AI girlfriend that I will save money to have access to this new technology. She suggested a circular scheme where OpenAI will pay me $10,000 per year to have access to this rare resource of 21st-century daguerreotype.

  • green_wheel 5 hours ago ago

    Well, artists, you guys had a good run. Thank you for your service.

  • manishfp 5 hours ago ago

    Goated release tbh. The text work inside the images is nice

  • aliljet 14 hours ago ago

    I am hopeful that OpenAI will potentially offer clarity on their loss-leading subscription model. I'd prefer to know the real cost of a token from OpenAI as opposed to praying the venture-funded tokens will always be this cheap.

  • tkgally 9 hours ago ago

    I had it produce a two-page manga with Japanese dialogue. Nearly perfect:

    https://www.gally.net/temp/20260422-chatgpt-images-2-example...

  • prvc 6 hours ago ago

    I hope they will consider releasing DALL-E 2 publicly, now that there has been so much progress since it was unveiled. It had a really nice vibe to it, so worth preserving.

    • andy_ppp 6 hours ago ago

      Yes, I’ve always thought of AI companies as sentimental. They will definitely do this :-/

      • prvc 5 hours ago ago

        That's why I want it; their motives for doing it, should they decide to, would presumably be different.

  • Danox 9 hours ago ago

    Sam Altman, in his meeting with Tim Cook two and a half years ago: "Give me money. I think it'll take $150 billion dollars." Tim Cook: "Well, here's what we're going to do; this is what I think it's worth…"

    Later Google tried the same thing: Apple, we will give you a $1 billion dollar a year refund. What's changed in two and a half years?