FLUX is fast and it's open source

(replicate.com)

216 points | by smusamashah 16 hours ago ago

100 comments

  • sorenjan 14 hours ago ago

    Text to image models feels inefficient to me. I wonder if it would be possible and better to do it in separate steps, like text to scene graph, scene graph to semantically segmented image, segmented image to final image. That way each step could be trained separately and be modular, and the image would be easier to edit instead of completely replace it with the output of a new prompt. That way it should be much easier to generate stuff like "object x next to object y, with the text foo on it", and the art style or level of realism would depend on the final rendering model which would be separate from the prompt adherence.
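
    As a sketch of the idea (every function below is a hypothetical placeholder, not a real model or API), such a pipeline could look something like this, with each intermediate representation exposed for editing:

        # Hypothetical modular text-to-image pipeline; each stage would be a
        # separately trained, swappable model.
        def text_to_scene_graph(prompt: str) -> dict:
            """Parse the prompt into objects, attributes, and spatial relations."""
            ...

        def scene_graph_to_segmentation(graph: dict):
            """Lay the scene out as a semantically segmented image."""
            ...

        def render_image(segments, style: str = "photorealistic"):
            """Turn segments into pixels; only this stage decides art style/realism."""
            ...

        def generate(prompt: str, style: str = "photorealistic"):
            graph = text_to_scene_graph(prompt)            # editable intermediate 1
            segments = scene_graph_to_segmentation(graph)  # editable intermediate 2
            return render_image(segments, style)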

    Kind of like those video2video (or img2img on each frame I guess) models where they enhance the image outputs from video games:

    https://www.theverge.com/2021/5/12/22432945/intel-gta-v-real... https://www.reddit.com/r/aivideo/comments/1fx6zdr/gta_iv_wit...

    • miki123211 7 hours ago ago

      In general, it has been shown time and time again that this approach fails for neural network based models.

      If you can train a neural network that goes from a to b and a network that goes from b to c, you can usually replace that combination with a simpler network that goes from a to c directly.

      This makes sense, as there might be information in a that we lose by a conversion to b. A single neural network will ensure that all relevant information from a that we need to generate c will be passed to the upper layers.

      • sorenjan 6 hours ago ago

        Yes this is true, you do lose some information between the layers, and this increased expressibility is the big benefit of using ML instead of classic feature engineering. However, I think the gain would be worth it for some use cases. You could for instance take an existing image, run that through a semantic segmentation model, and then edit the underlying image description. You could add a yellow hat to a person without regenerating any other part of the image, you could edit existing text, change a person's pose, you could probably more easily convert images to 3D, etc.

        It's probably not a viable idea, I just wish for more composable modules that let us understand the models' representations better and change certain aspects of them, instead of these massive black boxes that mix all these tasks into one.

        I would also like to add that the text2image models already have multiple interfaces between different parts. There's the text encoder, the latent-to-pixel-space VAE decoder, ControlNets, and sometimes there's a separate img2img style transfer at the end. Transformers already process images patchwise, but why do those patches have to be uniform square patches instead of semantically coherent areas?

      • smrtinsert an hour ago ago

        It's my understanding that an a-to-c network will usually be bigger parameter-wise and more costly to train.

    • kqr 11 hours ago ago

      Isn't this essentially the approach to image recognition etc. that failed for ages until we brute-forced it with bigger and deeper matrices?

      It seems sensible to extract features and reason about things the way a human would, but it turns out it's easier to scale pattern matching done purely by computers.

      • WithinReason 10 hours ago ago
        • selvan 9 hours ago ago

          From the PDF - "One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are "search" and "learning".

          The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done."

        • nuancebydefault 7 hours ago ago

          If I took the Lesson literally, we should not even study text-to-image. We should study how a machine with limitless CPU cycles would make our eyes see something we are currently thinking of.

          My point being: optimization, or splitting the problem into sub-problems before handing it over to the machine, makes sense.

          • stoniejohnson an hour ago ago

            I think the bitter lesson implies that if we could study/implement "how a machine with limitless cpu cycles would make our eyes see something we are currently thinking of" then it would likely lead to a better result than us using hominid heuristics to split things into sub-problems that we hand over to the machine.

            • nuancebydefault 12 minutes ago ago

              The technology to probe brains and vision-related neurons exists today. With limitless CPU cycles we would for sure be able to make us see whatever we think about.

      • nuancebydefault 7 hours ago ago

        A problem with image recognition I can think of is that any crude categorization of the image, which is millions of pixels, will make it less accurate.

        With image generation on the other hand, which starts from a handful of words, we can first do some text processing into categories, such as objects vs people, color vs brightness, environment vs main object, etc.

      • nerdponx 4 hours ago ago

        You could imagine doing it with two specialized NNs, but then you'd have to assemble a huge labeled dataset of scene graphs. The problem, fundamentally, is that any "manual" feature engineering is not going to be supervised and fitted on a huge corpus the way the self-learned features are.

    • spencerchubb 13 hours ago ago

      That's essentially what diffusion does, except it doesn't have clear boundaries between "scene graph" and "full image". It starts out noisy and adds more detail gradually

      • WithinReason 11 hours ago ago

        That's true, the inefficiency is from using pixel-to-pixel attention at each stage. In the beginning low resolution would be enough; even at the end, high resolution is only needed in each pixel's neighborhood.

    • ZoomZoomZoom 12 hours ago ago

      The issue with this is there's a false assumption that an image is a collection of objects. It's not (necessarily).

      I want a picture of frozen cyan peach fuzz.

      • llm_trw 12 hours ago ago

        https://imgur.com/ayAWSKr

        Prompt: frozen cyan peach fuzz, with default settings on a first generation SD model.

        People _seriously_ do not understand how good these tools have been for nearly two years already.

        • ZoomZoomZoom 7 hours ago ago

          If by people you mean me, then I wasn't clear enough in my comment. The example was meant to imply an image without any of the objects the GP was talking about, just a uniform texture.

        • sorenjan 5 hours ago ago

          Running that image through Segment Anything you get this: https://imgur.com/a/XzCanxx

          Imagine if instead of generating the RGB image directly the model would generate something like that, but with richer descriptive embeddings on each segment, and then having a separate model generating the final RGB image. Then it would be easy to change the background, rotate the peach, change color, add other fruits, etc, by editing this semantic representation of the image instead of wrestling with the prompt to try to do small changes without regenerating the entire image from scratch.
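
          As a rough sketch of that first step, using Meta's segment-anything library (the checkpoint and image file names are assumptions; the embedding/re-rendering part is the hypothetical piece):

              import cv2
              from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

              image = cv2.cvtColor(cv2.imread("peach.png"), cv2.COLOR_BGR2RGB)

              # Official ViT-H checkpoint from the segment-anything repo
              sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
              masks = SamAutomaticMaskGenerator(sam).generate(image)

              # Each mask dict carries 'segmentation', 'bbox', 'area', ...
              # The hypothetical next step: attach a descriptive embedding to each
              # segment, edit those, and hand them to a separate rendering model.
              for m in masks:
                  print(m["bbox"], m["area"])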

        • thomashop 10 hours ago ago
          • corn13read2 8 hours ago ago

            You can do this with any image generation model.

            Disclaimer: I'm not behind any

    • Zambyte 4 hours ago ago

      You seem to be describing ComfyUI to me. You can definitely do this kind of workflow with ComfyUI.

    • teh_infallible 10 hours ago ago

      I am hoping that AI art tends towards a modular approach, where generating a character, setting, style, and camera movement each happens in its own step. It doesn’t make sense to describe everything at once and hope you like what you get.

      • sorenjan 6 hours ago ago

        Definitely, that would make much more sense seeing how content is produced by people. Adjust the technology to how people want to use it instead of forcing artists to become prompt engineers and settle for something close enough to what they want.

        At the very least image generators should output layers; I think the style component is already possible with the img2img models.

    • seydor 10 hours ago ago

      Neural networks will gradually be compressed to their minimum optimal size (once we know how to do that)

  • trickstra 10 hours ago ago

    Non-commercial is not open-source, because if the original copyright holder stops maintaining it, nobody else can continue (or has to work like a slave for free). Open-source is about what happens if the original author stops working on it. Open-source gives everyone the license to continue developing it, which obviously means also the ability to get paid. Don't call it open-source if this aspect is missing.

    Only FLUX.1 [schnell] is open-source (Apache 2.0); FLUX.1 [dev] is non-commercial.

    • uxhacker 5 hours ago ago

      There is OpenFLUX.1, which is a fine-tune of the FLUX.1-schnell model that has had the distillation trained out of it. OpenFLUX.1 is licensed Apache 2.0. https://huggingface.co/ostris/OpenFLUX.1/
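
      A hedged sketch of loading it with Hugging Face diffusers, assuming the repo is published in diffusers format (step count and guidance value are guesses; check the model card):

          import torch
          from diffusers import FluxPipeline

          pipe = FluxPipeline.from_pretrained("ostris/OpenFLUX.1", torch_dtype=torch.bfloat16)
          pipe.enable_model_cpu_offload()  # optional: helps fit on smaller GPUs

          image = pipe(
              "a donkey holding a sign that says FLUX",
              num_inference_steps=8,  # assumed: a de-distilled model may need more steps than schnell's ~4
              guidance_scale=3.5,     # assumed value
          ).images[0]
          image.save("openflux.png")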

    • starfezzy 9 hours ago ago

      Doesn’t open source mean the source is viewable/inspectable? I don’t know any closed source apps that let you view the source.

      • miki123211 7 hours ago ago

        > Doesn’t open source mean the source is viewable/inspectable?

        According to the OSI definition, you also need a right to modify the source and/or distribute patches.

        > I don’t know any closed source apps that let you view the source.

        A lot of them do, especially in the open-core space. The model is called source-available.

        If you're selling to enterprises and not gamers, that model makes sense. What stops large enterprises from pirating software is their own lawyers, not DRM.

        This is why you can put a lot of strange provisions into enterprise software licenses, even if you have little to no way to enforce these provisions on a purely technical level.

      • havaker 8 hours ago ago

        Open source usually means that you are able to modify and redistribute the software in question freely. However, between open and closed, there is another class: source-available software. From its Wikipedia page:

        > Any software is source-available in the broad sense as long as its source code is distributed along with it, even if the user has no legal rights to use, share, modify or even compile it.

      • aqme28 6 hours ago ago

        Website frontends are always source viewable, but that is not OSS.

  • thomashop 10 hours ago ago

    If you want to play with FLUX.schnell easily, type the prompt into a Pollinations URL:

    https://pollinations.ai/p/a_donkey_holding_a_sign_with_flux_...

    https://pollinations.ai/p/a_donkey_holding_a_sign_with_flux_...

    https://pollinations.ai/p/Minimalist%20and%20conceptual%20ar...

    It's incredible how fast it is. We generate 8000 images every 30 minutes for our users using only three L40S GPUs. Disclaimer: I'm behind Pollinations
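
    If you'd rather script it than paste URLs, a small helper along these lines should work. It just URL-encodes the prompt into the same pollinations.ai/p/ pattern shown above; extra query parameters aren't assumed here:

        import urllib.parse
        import urllib.request

        def pollinations_image(prompt: str, out_path: str = "out.jpg") -> str:
            # Build the pollinations.ai/p/<encoded prompt> URL and save the response bytes.
            url = "https://pollinations.ai/p/" + urllib.parse.quote(prompt)
            with urllib.request.urlopen(url) as resp, open(out_path, "wb") as f:
                f.write(resp.read())
            return out_path

        pollinations_image("a donkey holding a sign with flux written on it")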

    • peterpans01 9 hours ago ago

      The "only" word sounds quite expensive for most of us.

      • Kiboneu 5 hours ago ago

        He started a whole business to help pay the installments.

      • FridgeSeal 6 hours ago ago

        “I have successfully destabilised many countries with only a few tanks”.

  • jsemrau 12 hours ago ago

    My favorite thing to do with Flux is creating images with a white background for my substack[1], because the text rendering is amazing and I can communicate something visually through the artwork as well.

    [1]https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_...

    • ruthmarx 11 hours ago ago

      That example you gave is a good reason why artists get pissed off, IMO. The model is clearly aping some artist's specific style, and that artist is now missing out on paid work as a result.

      Not sure I have an opinion on that, technology marches on etc, but it is interesting.

      • jsemrau 11 hours ago ago

        I understand your point, but in 0% of all cases would I hire an artist to create imagery for my personal blog. Therefore, I would think that market doesn't exist.

        • earthnail 8 hours ago ago

          However, the blogs or newspapers or print outlets that used to hire them did so because you couldn't - it was a differentiator.

          That differentiator is gone, and as such they won't pay for it anymore. They'll just use the same AI as you.

          This destroys the existing market of the artist.

          To be clear, my comment isn’t meant as a judgment, just as market analysis.

          • jsemrau 8 hours ago ago

            I think it does not take into consideration how much thought and expertise goes into design work. Have a look at the recent controversy about the live-service shooter "Concord", which failed spectacularly mainly due to bad character design.

            Here are two videos that explain that well. I don't think I would ever be capable of designing with that degree of purpose given a generative AI tool.

            [1] https://www.youtube.com/watch?v=mVyXUMJLzE0 [2] https://www.youtube.com/watch?v=5eymH15AfAU

            • ionwake 3 hours ago ago

              Thanks for the links. I'm glad there are people who are experts at character design. To my untrained eyes it just looks like all of the characters are muddy coloured (washed-out greens, browns, etc.) AND they are pretty much all incredibly ugly. I think I saw one that at least looked fashionable, the black sniper female.

              The older I get, the more concerned I get that the larger the team making decisions, the worse the decisions are. What's the word for this? Is there any escape? Team Fortress 2 took years and teams to build, but it was just perfect.

              I heard they had a flat structure which is even more confusing as to how they attained such an excellent product.

              • hansvm an hour ago ago

                Bureaucracy and hierarchy are much more damaging to good products than a large team. The flat structure and long timelines are how they overcame the limitations of a large team.

          • smrtinsert an hour ago ago

            This is about as realistic as replacing coders with AI tools today. High-level content organizations demand creative precision that even models like Flux can ape but not replace. Maybe to a non-artist it would look comparable, but to a creative team it's not close.

        • ruthmarx 10 hours ago ago

          Yeah, I get that completely, I'm the same way. I just think it's interesting. It's kind of the same argument as piracy, since most people wouldn't pay for what they download if it wasn't free.

          • ilkke 9 hours ago ago

            What is different in this case is that large companies are very likely looking to replace artists with ai, which is a huge potential impact. Piracy never had such risks

            • jsemrau 8 hours ago ago

              I think this will only happen if you can replace parts within an image selectively and reliably. There are still major problems even in Photoshop's generative AI features. For example, it is not possible to select the head of a person in a picture and then type "smile" to make the face smile. We might get there eventually.

          • jsemrau 10 hours ago ago

            I'd rather think it's the same argument as open source and public domain. Currently, I am researching an agent that ReActs through a game of Tic-Tac-Toe. I am using a derivative of the open-source Transformers prompt.

            • ruthmarx an hour ago ago

              > I'd rather think it's the same argument as open-source and public domain.

              In the context of the point I made, it's definitely more similar to piracy, since the point was about taking advantage of something that if not free people would not pay for.

      • pajeets 10 hours ago ago

        Dont care about artists opinion on rest of using AI tools instead of not paying them because I couldnt and wouldnt so theres no demand in the first place.

        All I wanna know is the prompt that was used to generate the art speaking of which i wanna know how to create cartoony images like that OP

        • Scrapemist 4 hours ago ago

          Like you don’t care about punctuation marks.

    • slig 5 hours ago ago

      Could you share the prompt? Thanks.

      • jsemrau an hour ago ago

        The prompt is actually not that interesting.

        "A hand-drawing of a scientific middle-aged man in front of a white background. The man is wearing jeans and a t-shirt. He is thinking a bubble stating "What's in a ReAct JSON prompt?" In the style of European comic book artists of the 1970s and 1980s."

        Finding the right seed and model configuration is the more difficult part.
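
        For what it's worth, pinning the seed is just a fixed generator passed to whatever Flux pipeline you use; a minimal sketch with diffusers (model ID, step count, and guidance are assumptions, and the prompt is abridged from above):

            import torch
            from diffusers import FluxPipeline

            pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

            prompt = "A hand-drawing of a scientific middle-aged man in front of a white background. ..."
            generator = torch.Generator("cpu").manual_seed(1234)  # the "right seed" is found by trial and error
            image = pipe(prompt, generator=generator, num_inference_steps=28, guidance_scale=3.5).images[0]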

  • vunderba 14 hours ago ago

    Flux is the leading contender for locally hosted generative systems in terms of prompt adherence, but the omnipresent shallow depth of field is irritatingly hard to get rid of.

    • cranium 12 hours ago ago

      I guess it's optimized for artsy images?

      • AuryGlenz 11 hours ago ago

        They almost certainly did DPO it, so that would have an effect. It was also probably just trained more on professional photography than cell phone pics.

        I’ve found it odd how there’s a segment of the population that hates a shallow depth of field now, as they’re so used to their phone pictures. I got in an argument on Reddit (sigh) with someone who insisted that the somewhat shallow depth of field that SDXL liked to do by default was “fake.”

        As in, he was only ever exposed to it through portrait mode and the like on phones and didn’t comprehend that larger sensors simply looked like that. The images he was posting that looked “fake” to him looked to be about a 50mm lens at f/4 on a full frame camera at a normal portrait distance, so nothing super shallow either.

        • vunderba 3 hours ago ago

          That's pretty funny. It reminds me of how, if you grew up watching movies at the standard 24 fps, trying to watch films at 60 fps later felt unnatural and fake.

          I'll say I'm okay with DOF - it just feels (subjectively, to me) like it's incredibly exaggerated in Flux. The workarounds have mostly been prompt-based, adding everything from "gopro capture" to "on flickr in 2007", but this approach feels like borderline alchemy in terms of how reliable it is.

        • Adverblessly 8 hours ago ago

          As a DoF "hater", my problem with it is that DoF is just the result of a sensor limitation (when not used artistically etc.), not some requirement of generating images. If I can get around that limitation, there's very little motivation to maintain that flaw.

          In the real world, if I see a person at the beach, I can look at the person and see them in perfect focus, I can then look at the ocean behind them and it is also in perfect focus. If you are an AI generating an image for me, I certainly don't need you to tell me on which parts of that image I'm allowed to focus, just let me see both the person and the ocean (unless I tell you to give me something artsy :)).

          • tcrenshaw 4 hours ago ago

            While you could look at DoF as a sensor limitation, most photographers use it as an artistic choice. Sure, I could take a pic at f/16 and have everything within the frame in focus, but maybe the background is distracting and takes away from the subject. I can choose how much background separation I want; maybe just a touch at f/8, maybe full-on blur at f/1.2.

      • llm_trw 12 hours ago ago

        Give it another month and it will be porn, just like sdxl.

        • Zopieux 6 hours ago ago

          What are you talking about, the model is months old, it's already all porn - and that's okay.

  • thierryzoller 8 hours ago ago

    They point to their comparison page to claim similar quality. First off, it's very clear that there is far less detail, but worse, look at the example "Three-quarters front view of a yellow 2017 Corvette coming around a curve in a mountain road and looking over a green valley on a cloudy day."

    The original model shows the FRONT, the speed version shows the BACK of the Corvette. It's a completely different picture. This is not similar but strikingly different.

    https://flux-quality-comparison.vercel.app/

  • CosmicShadow 14 hours ago ago

    I just cancelled my Midjourney subscription, it feels like it's fallen too far behind for the stuff I'd like to do. Spent a lot of time considering using Replicate as well as Ideogram.

    • simonjgreen 12 hours ago ago

      I have been questioning the value beyond novelty as well recently. I’m curious if you replaced it with another tool or simply don’t derive value from those things?

    • pajeets 10 hours ago ago

      I never used Midjourney because it had that signature look and was bad with hands, feet, and letters.

      Crazy that not even a year has passed since Emad's downfall and a local, open source, and superior model drops.

      Which just shows how little moat these companies have and that they are just lighting cash on fire, which we benefit from.

      • rolux 7 hours ago ago

        > Crazy that not even a year has passed since Emad's downfall and a local, open source, and superior model drops

        > Which just shows how little moat these companies have

        Flux was developed by the same people that made Stable Diffusion.

      • aqme28 8 hours ago ago

        Flux has a signature look too, it’s just a different one.

      • keiferski 9 hours ago ago

        It’s very easy to turn off the default Midjourney look.

  • jncfhnb 4 hours ago ago

    Does this translate to gains locally with ComfyUI?

  • 112233 10 hours ago ago

    Does someone know what FLUX 1.1 has been trained on? I generated almost a hundred images on the pro model using "camera filename + simple word" two-word prompts, and they all look like photos from someone's phone. Like, unless it has text I would not even stop to consider any of these images AI. They sometimes look cropped. A lot of food pictures, messy tables and apartments, etc.

    Did they scrape public Facebook posts? Snapchat? VKontakte? Buy private images from OneDrive/Dropbox? If I put a female name as the second word, it almost always triggers the NSFW filter. So I assume the images in the training set are quite private.

    See for yourself (autoplay music warning):

    people: https://vm.tiktok.com/ZGdeXEhMg/

    food and stuff: https://vm.tiktok.com/ZGdeXEBDK/

    signs: https://vm.tiktok.com/ZGdeXoAgy/

    [edit] Looking at these images makes me uneasy, like I am looking at someone's private photos. There is not enough "guidance" in a prompt like "IMG00012.JPG forbid" to account for these images, so it must all come from the training data.

    I do not believe FLUX 1.1 pro has a radically different training set than these previous open models, even if it is more prone to such generations.

    It feels really off, so, again, is there any info on training data used for these models?

    • smusamashah 10 hours ago ago

      It's not just flux, you can do the same with other models including Stable Diffusion.

      These two reddit threads [1][2] explore this convention a bit.

          DSC_0001-9999.JPG - Nikon Default
          DSCF0001-9999.JPG - Fujifilm Default
          IMG_0001-9999.JPG - Generic Image
          P0001-9999.JPG - Panasonic Default
          CIMG0001-9999.JPG - Casio Default
          PICT0001-9999.JPG - Sony Default
          Photo_0001-9999.JPG - Android Photo
          VID_0001-9999.mp4 - Generic Video
          
          Edit: Also created a version for 3D Software Filenames (all of them tested, only a few had some effects)
          
          Autodesk Filmbox (FBX): my_model0001-9999.fbx
          Stereolithography (STL): Model0001-9999.stl
          3ds Max: 3ds_Scene0001-9999.max
          Cinema 4D: Project0001-9999.c4d
          Maya (ASCII): Animation0001-9999.ma
          SketchUp: SketchUp0001-9999.skp
      
      
      [1]: https://www.reddit.com/r/StableDiffusion/comments/1fxkt3p/co...

      [2]: https://www.reddit.com/r/StableDiffusion/comments/1fxdm1n/i_...

    • jncfhnb 2 hours ago ago

      I highly doubt it’s a product of the raw training dataset because I had the opposite problem. The token for “background” introduced intense blur on the whole image almost regardless of how it was used in the prompt, which is interesting because their prompt interpretation is much better.

      It seems likely that they did heavy calibration of text as well as a lot of tuning efforts to make the model prefer images that are “flux-y”.

      Whatever process they’re following, they’ve inadvertently made the model overly sensitive to certain terms to the point at which their mere inclusion is stronger than a Lora.

      The photos you’re showing aren’t especially noteworthy in the scheme of things. It doesn’t take a lot of effort to “escape” the basic image formatting and get something hyper realistic. Personally I don’t think they’re trying to hide the hyper realism so much as trying to default to imagery that people want.

    • pajeets 10 hours ago ago

      I experienced the same thing. It was so weird: I got good results in the beginning and then it "craps out".

      Don't know why all the critical comments about Flux are being downvoted or flagged, sure is weird.

  • ionwake 3 hours ago ago

    How long does Flux take to generate an image if it runs on an M1 MacBook Pro? Can anyone estimate?

  • swyx 14 hours ago ago

    > We added a new synchronous HTTP API that makes all image models much faster on Replicate.

    ooh why is synchronous fast? i click thru to https://replicate.com/changelog/2024-10-09-synchronous-api

    > Our client libraries and API are now much faster at running models, particularly if a file is being returned.

    ... thanks?

    just sharing my frustration as a developer. try to explain things a little better if you'd like it to stick/for us to become your advocates.

    • weird-eye-issue 14 hours ago ago

      I mean it literally explains why in the second paragraph. It returns the actual file data in the response rather than a URL where you have to make a second request to get the file data
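
      Roughly, the difference looks like this with the replicate Python client (a sketch; exact return types vary by client version, but newer clients hand back file-like outputs you can read directly instead of a bare URL):

          import replicate

          # flux-schnell returns a list of images for a prediction
          output = replicate.run(
              "black-forest-labs/flux-schnell",
              input={"prompt": "a donkey holding a sign that says FLUX"},
          )

          # Before: the output was a URL string, so fetching the bytes took a second
          # HTTP round trip. Now: read the file data straight from the output object.
          with open("flux.webp", "wb") as f:
              f.write(output[0].read())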

      • swyx 12 hours ago ago

        thats not "making the image models much faster", thats just making getting the image back slightly faster

        • weird-eye-issue 5 hours ago ago

          In all practical senses it is the same thing

        • popalchemist 11 hours ago ago

          The "making the image models much faster" part is model optimizations that are also explained in the post.

          • ErikBjare 10 hours ago ago

            Where? I don't see any explanation of model optimizations in the linked post.

  • swyx 14 hours ago ago

    this comparison for the quantization effect is very nice https://flux-quality-comparison.vercel.app/

    however i do have to ask.. ~2x faster for fp16->fp8 is expected right? it's still not as good as the "realtime" or "lightning" options that basically have to be 5-10x faster. what's the ideal product use case for just ~2x faster?
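
    The ~2x is roughly what the arithmetic says: fp8 weights are half the bytes of fp16, so weight memory and bandwidth halve, and fp8 tensor cores roughly double peak matmul throughput on hardware that has them. A toy illustration of the storage half (needs a PyTorch recent enough to have float8 dtypes; it shows storage only, not fp8 matmuls):

        import torch

        w16 = torch.randn(4096, 4096, dtype=torch.float16)
        w8 = w16.to(torch.float8_e4m3fn)

        print(w16.element_size() * w16.numel() / 2**20, "MiB")  # ~32 MiB
        print(w8.element_size() * w8.numel() / 2**20, "MiB")    # ~16 MiB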

    • sroussey 11 hours ago ago

      Funny, sometimes I like the fast one better.

  • dvrp 13 hours ago ago

    I think we (Krea) are faster at the time of writing this comment (but I'll have to double-check on our infra)

  • LeicaLatte 14 hours ago ago

    Flux is awesome and improving all the time.

  • lolinder 15 hours ago ago
    • dang 14 hours ago ago

      "Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

      https://news.ycombinator.com/newsguidelines.html

      • lolinder 14 hours ago ago

        I just posted a reply to another person quoting this guideline:

        > In general I think that's true and agree that minor name collision commentary is uninteresting, but in this case we're talking about 11 collisions (and counting) in tech alone, 3 of those in AI/ML and 1 of those specifically in image generation.

        > When it's that bad I think that the frequency of collisions for this name is an interesting topic in its own right.

        I'll respect your judgement on this and not push it further, but this is my thought process here.

    • CGamesPlay 14 hours ago ago

      Well, the word refers to "continuous change", so I guess it's pretty appropriate.

      • achrono 14 hours ago ago

        this name flux

    • dig1 14 hours ago ago

      Also https://github.com/influxdata/flux - "a lightweight scripting language for querying databases and working with data"

    • Conscat 14 hours ago ago

      The first thing that comes to mind when I think "flux" is none of the above either. There's an extremely cool alternative iterator library for C++20 by Tristan Brindle named flux.

      • bigiain 14 hours ago ago

        /me glances across my desk to see my soldering station...

    • roenxi 14 hours ago ago

      And then you can branch out of AI - https://en.wikipedia.org/wiki/The_Flux_Foundation works on public art.

    • swyx 14 hours ago ago

      there are just some names that technology brothers gravitate to like moths to a flame. Orion, Voltron, Galactus...

    • worstspotgain 14 hours ago ago
    • artificialLimbs 14 hours ago ago

      Don't forget Caleb Porzio's new Laravel UI kit.

      https://fluxui.dev/

    • Vt71fcAqt7 14 hours ago ago

      >Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

      • lolinder 14 hours ago ago

        In general I think that's true and agree that minor name collision commentary is uninteresting, but in this case we're talking about 11 collisions (and counting) in tech alone, 3 of those in AI/ML and 1 of those specifically in image generation.

        When it's that bad I think that the frequency of collisions for this name is an interesting topic in its own right.

  • Palmik 3 hours ago ago

    Every time there's a thread about models from Meta, there's a flood of comments clarifying that they aren't really open source.

    So let's also set the record straight for FLUX: only one of the released models is open source -- FLUX schnell -- and it's a distillation of the proprietary model, which makes it much harder to work with.

    Meta's Llama models ironically have a much more permissive license for all practical intents and purposes, and they are also incredibly easy to fine-tune (using Meta's own open source framework, or several third-party ones), while FLUX schnell isn't.

    I think the open source community should rally behind OpenFLUX or a similar project, which tries to fix the artificial limitations of Schnell: https://huggingface.co/ostris/OpenFLUX.1