• dandi8@fedia.io · 4 months ago

    Just because open source AI is not feasible at the moment is no reason to change the definition of open source.

    • chebra@mstdn.io · 4 months ago

      @dandi8 but you are the one who is changing it. And who said it’s not feasible? The Mixtral model is open-source. WizardLM2 is open-source. Phi3:mini is open-source… what’s your point?

      But the license of the model is not related to the license of the data used for training, nor the license for the scripts and libraries. Those are three separate things.

      • dandi8@fedia.io · 4 months ago

        https://en.m.wikipedia.org/wiki/Open-source_software

        Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.

        From Mistral’s FAQ:

        We do not communicate on our training datasets. We keep proprietary some intermediary assets (code and resources) required to produce both the Open-Source models and the Optimized models. Among others, this involves the training logic for models, and the datasets used in training.

        https://huggingface.co/mistralai/Mistral-7B-v0.1/discussions/8

        Unfortunately we’re unable to share details about the training and the datasets (extracted from the open Web) due to the highly competitive nature of the field.

        The training data set is a vital part of the source code because without it, the rest of it is useless. The model is the compiled binary, the software itself.

        If you can’t share part of your source code due to the “highly competitive nature of the field” (or whatever other reason), your software is not open source.

        I cannot look at Mistral’s source and see that, oh yes, it behaves this way because it was trained on this piece of data in particular - because I was not given access to this data.

        I cannot build Mistral from scratch, because I was not given a vital piece of the recipe.

        I cannot fork Mistral and create a competitor from it, because the devs specifically said they’re not providing the source because they don’t want me to.

        You can keep claiming that releasing the binary makes it open source, but that’s not going to make it correct.

        • chebra@mstdn.io · 4 months ago

          @dandi8

          > The training data set is a vital part of the source code because without it, the rest of it is useless.

          This is simply false. A dataset is not the “source code” of a model. You need to delete this notion from your brain. A model is not the same as a compiled binary.

          • dandi8@fedia.io · 4 months ago

            Gee, you sure put a lot of effort into supporting your argument in this comment.

            • chebra@mstdn.io · 4 months ago

              @dandi8 But the proof is in your quote. Open source is a license which allows people to study the source code. The source code of a model is a bunch of float numbers, and you can study it as much as you want in Mixtral and others. Clearly a model can be published without the dataset (Mixtral), and also a model can be closed, hosted, unavailable for study (OpenAI). I think you need to find some argument showing how “source code” of a model = the dataset. It just isn’t so.
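
              For illustration, a minimal sketch (assuming the Hugging Face transformers library and the Mistral-7B-v0.1 checkpoint linked above) of what studying those floats looks like in practice:

              ```python
              # Enumerate the released weights: every parameter tensor is downloadable
              # and inspectable as plain floating-point numbers.
              from transformers import AutoModelForCausalLM

              model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

              for name, param in model.named_parameters():
                  # e.g. "model.layers.0.self_attn.q_proj.weight (4096, 4096)"
                  print(name, tuple(param.shape), param.dtype)
              ```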

              • dandi8@fedia.io · 4 months ago

                That’s like saying the source code of a binary is a bunch of hexadecimal numbers. You can use a hex editor to look at the “source” of every binary, but it’s not human-readable.
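
                To make the comparison concrete, a toy sketch (the file names are assumptions, not real Mistral artifacts): dumping the bytes of a compiled binary and the raw floats of a weight file both give you something to look at, and neither is the source it was built from.

                ```python
                # Toy comparison; /bin/ls stands in for any compiled binary on a Linux
                # machine, and "model-weights.bin" is a hypothetical flat file of
                # float32 weights (not an actual Mistral release file).
                import numpy as np

                with open("/bin/ls", "rb") as f:
                    print(f.read(16).hex(" "))   # the hex-editor view: "7f 45 4c 46 ..."

                weights = np.fromfile("model-weights.bin", dtype=np.float32, count=8)
                print(weights)                   # the weight view: a row of floats
                ```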

                Yes, the model can be published without the dataset - that makes it, by definition, freeware (free to use and distribute at no cost). It can even be free for commercial use. That doesn’t make it open source.

                At best, the tools to generate a model may be open source, but, by definition, the model itself can never be considered open-source unless the training data and the tools are both open-source.

                • chebra@mstdn.io · 4 months ago

                  @dandi8 surprise surprise, LLMs are not classic compiled software, in case you haven’t noticed yet. You can’t just transfer the same notions between these two. That’s like wondering why quantum physics doesn’t work the same as agriculture.

                  Think of it as a database. If you have an open-source social network, all the tools and code are published, free to use, but the value of the network is in the posts, the accounts, the people who keep coming back. The data in the database is not the source code.

                  • dandi8@fedia.io · 4 months ago

                    You’re trying to change the definition of open source for AI models and your argument is that they’re magic so different rules should apply.

                    No, they’re not fundamentally different from other software. Not by that much.

                    The training data is the source of knowledge for the AI model. The tools to train the model are the compiler for that AI model. What makes an AI model different from another is both the source of knowledge and the compiler of that knowledge.
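
                    As a toy sketch of that analogy (nothing to do with Mistral’s actual pipeline, just NumPy): the same open “compiler” run on two different datasets produces two different sets of weights, so reproducing a given model needs both pieces.

                    ```python
                    # Same training code, different data -> different weights.
                    # Without the data, the "compiler" alone cannot rebuild the model.
                    import numpy as np

                    def train(X, y):
                        # the open part: an ordinary least-squares "compiler"
                        w, *_ = np.linalg.lstsq(X, y, rcond=None)
                        return w

                    rng = np.random.default_rng(0)
                    X = rng.normal(size=(100, 3))

                    y_a = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)
                    y_b = X @ np.array([3.0, 2.0, 1.0]) + rng.normal(scale=0.1, size=100)

                    print(train(X, y_a))  # roughly [1, 2, 3]
                    print(train(X, y_b))  # roughly [3, 2, 1]
                    ```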

                    AFAIK, only one of those things is open source for Mistral - the compiler of knowledge.

                    You can make an argument that tools to make Mistral models are open source. You cannot make an argument that the model Mistral Nemo is open source, as what makes it specifically that model is the compiler and the training data used, and one of those is unavailable.

                    Therefore, I can agree on the social network analogy if we’re talking about whether the tools to make Mistral models are open-source. I cannot agree if we’re talking about the models themselves, which is what everyone’s interested in when talking about AI.