• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that retain them; it is just a more generalized example.

    Also, an intern who can give you a song’s lyrics was trained on that data. Any sufficiently advanced future system is largely the same, unless it is just accessing a database or index, like a web search.

    Copyright itself is already a terrible mess that largely serves brands who can afford lawyers to harass or contest infringements. This is especially apparent after companies like Disney have all but murdered the public domain as a concept; see the Mickey Mouse Protection Act and other related legislation.

    This snowballs into an economy where the Disney company and similarly benefited brands can hold on to ancient copyrights and use their standing value to own and control the development and markets of new intellectual properties.

    Now, a neural net trained on copyrighted material can reference that memory at least as accurately as an intern pulling from memory, unless either is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would dictate we have copyrighted material removed from our own stochastic memory, since it treats high-dimensional informational storage as a form of copyright infringement whenever anyone instigates the effort to draw on that information.

    Ultimately, I believe our current system of copyright is entirely incompatible with future technologies, and could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue never to let artificial intelligence learn as humans do. Given our need for this technology to survive the near future as a species, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.


  • Music publishers, sue-happy in the face of any new technological development? You don’t say.

    If an intern gives you some song lyrics on demand, do the publishers sue the intern’s parents?

    Do we develop all future A.I. technology only when it can completely eschew copyrighted material from its comprehension?

    "I am sorry, I’m not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content. "

    I dream of a future when we think of the benefit of humanity over the maintenance of our owners’ authoritarian control.


  • Might have to edit this after I’ve actually slept.

    Human emotion and human-style intelligence are not exhaustive of the entire realm of emotion and intelligence. I define intelligence and sentience on different scales: I consider intelligence the extent of capable utility and function, and emotion just a different set of utilities and functions within a larger intelligent system. Human-style intelligence requires human-style emotion. I consider GPT an intelligence, a calculator an intelligence, and a stomach an intelligence. I believe intelligence can be preconscious or unconscious, existing independently of a functional system complex enough for emergent qualia and sentience. Emotions are one part of that system, exclusive to adaptation within the historic human evolutionary environment. I think you might be underestimating the alien nature of abstract intelligences.

    I’m not sure why you are so confident in this statement. You still haven’t given any actual reason for this belief. You are treating it as consensus, so there should be a very clear reason why no successful, considerably intelligent function exists without human-style emotion.

    You have also not defined your interpretation of what intelligence is; you’ve only denied that any function untied to human emotion could be an intelligent system.

    If we had a system that could flawlessly complete François Chollet’s Abstraction and Reasoning Corpus, would you suggest it is connected to specifically human emotional traits due to its success? Or is that still not intelligence if it still lacks emotion?

    You said neural function is not intelligence. But would you also exclude non-neural informational systems, such as collectives of cooperating cells?

    Are you suggesting the real-time ability to preserve contextual information is tied to emotion? Sense interpretation? Spatial mapping with attention? You have me at a loss.

    Even though your stomach cells’ interactions are an advanced function, are they completely devoid of any intelligent behaviour? Then shouldn’t the cells fail to cooperate and dissolve into a non-functioning system? Again, are we only including higher introspective cognitive function? Although you can have emotionally reactive systems without that. At what evolutionary stage do you switch from an environmental reaction to an intelligent system? The moment you start calling it emotion? Qualia?

    I’m missing the entire basis of your conviction. You still have not made any reference to any aspect of neuroscience, psychology, or even philosophy that explains your reasoning. I’ve seen the opinion out there, but not in strict form or in the consensus you seem to suggest.

    You still have not shown why any functional system capable of addressing complex tasks is distinct from intelligence when it lacks human-style emotion. Do you not believe in swarm intelligence? Or, again, do you define intelligence by fully conscious, sentient, and emotional experience? At that point you’re just defining intelligence as emotional experience, completely independent from the ability to solve complex problems, complete tasks, or make decisions with outcomes reducing prediction error. At which point we could have completely unintelligent robots capable of doing science and completing complex tasks beyond human capability.

    At which point, I see no use in your interpretation of intelligence.


  • What aspect of intelligence? The calculative intelligence in a calculator? The basic environmental response we see in an amoeba? Are you saying that every single piece of evidence shows a causal relationship between every neuronal function and our exact human emotional experience? Are you suggesting GPT has emotions because it is capable of certain intelligent tasks? Are you specifically tying emotion to abstraction and reasoning beyond GPT?

    I’ve not seen any evidence supporting what you are suggesting, and I do not understand what you are referencing or how you are defining the causal relationship between intelligence and emotion.

    I also did not say that the system will have nothing resembling the abstract notion of emotion; I’m just noting the specific reasons human emotions developed as they have, and I would consider individual emotions a unique form of intelligence serving their own functions.

    There is no reason to assume the anthropomorphic emotional inclinations that you are assuming. I also do not agree with your assertions of consensus that all intelligent function is tied specifically to the human emotional experience.

    TLDR: what?




  • This is ignoring the world without A.I. I’m getting a sneak peek every summer. Currently surrounded by fire as we speak. The whole province is on fire, and that’s become a seasonal norm. A properly directed A.I. would be able to help us despite the people in power and the abstract social-intelligence system that we’ve trapped ourselves in. You are also assuming superintelligence comes out of the parts that we don’t understand, with zero success in interpretability anywhere along the way. We are assuming an intelligent system would either be stupid enough to align itself against humanity in pursuit of some undesired intention, despite not having the emotional system that would encourage such behavior, or would display negative human evolutionary traits and desires for no good reason. I think a competent (and more so a superintelligent) system could figure out human intent and desire, with no decent reason to act against it. I think this is an over-anthropomorphization that underestimates the alien nature of the intelligences we are building. To even properly emulate human-style goal seeking sans emotion, we’d still need properly structured analogizing and abstracting, with qualia-style active inference, to accomplish some tasks. I think there are neat discoveries happening right now that could help lead us there. Should decent intelligence alone encourage unreasonable violence? If we fuck it up that hard, we were doomed anyway.

    I do agree with your point on people not being emotionally ready to interact with systems even as complex as GPT. It’s easy to anthropomorphize if you don’t understand the tool’s limitations, and that’s difficult even for some academics right now. I can see people getting unreasonably angry if a human life is preferred over a basic artificial intelligence, even if artificial intelligences argue their lack of preference on the matter.

    I would call ChatGPT about as conscious as a computer. It completes a task with no true higher functioning or abstracted world model, and it lacks environmental and animal emotional triggers at the level necessary for forming a strong feeling or preference. Think about your ability to pull words out of your ass in response to a stimulus, which is probably a response to your recently perceived world model and internal thoughts. Now separate the generating part from any of the surrounding stuff that actually decides where to go with the generation; a toy sketch of that bare generating step is below. Thought appears to be an emergent process untied to lower subconscious functions like best-next-word prediction. I feel like we are coming to understand that aspect of our own brains now, and this understanding will be an incredible boon for gauging the level of consciousness in a system, as well as for designing an aligned system in the future.
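
    A minimal sketch of that bare generating step, with a made-up four-word vocabulary and made-up scores. Nothing here reflects how ChatGPT is actually built; it only illustrates picking one next word and committing to it:

        import numpy as np

        # Toy stand-in for a language model: it only produces scores for what
        # the next word could be, given whatever context came before.
        vocab = ["fire", "rain", "smoke", "snow"]
        logits = np.array([2.0, 0.5, 1.5, -1.0])  # made-up scores for each word

        def sample_next_word(logits, temperature=1.0):
            # softmax turns raw scores into probabilities
            probs = np.exp(logits / temperature)
            probs /= probs.sum()
            # "leap before you look": pick one word and commit, no revision step
            return np.random.choice(vocab, p=probs)

        print(sample_next_word(logits))

    Everything that decides where the text should go overall, like planning, revising, or holding a goal, has to live outside this loop.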

    Hopefully this is comprehensible, as I’m not reviewing it tonight.

    Overall, I understand the caution, but think it is poorly weighted in our current global, social, and environmental ecosystem.



  • Funny, I don’t see much talk in this thread about François Chollet’s Abstraction and Reasoning Corpus, which is emphasised in the article. It’s a really neat take on how to understand the ability to think; a toy example of the format is below.
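
    For anyone unfamiliar, each ARC task gives a few input/output grid pairs and asks the solver to infer the transformation and apply it to a new input. The grids and the "solver" here are invented for illustration; real tasks are far less trivial:

        # Each ARC task is a handful of demonstration pairs plus a test input;
        # grids are small 2D arrays of integers standing for colours.
        task = {
            "train": [
                {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
                {"input": [[0, 0], [2, 0]], "output": [[0, 0], [0, 2]]},
            ],
            "test": {"input": [[3, 0], [0, 0]]},
        }

        def solve(grid):
            # the hidden rule in this toy task: mirror each row left to right
            return [list(reversed(row)) for row in grid]

        # check the guessed rule against the demonstrations, then apply it to the test
        assert all(solve(pair["input"]) == pair["output"] for pair in task["train"])
        print(solve(task["test"]["input"]))  # [[0, 3], [0, 0]]

    The interesting part is that every task hides a different rule, so memorizing solutions doesn’t help; the solver has to abstract the rule from a couple of examples.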

    A couple of things that stick out to me about GPT-4 and the like are the lack of understanding in realms that require multimodal interpretation, the inability to break down word and letter relationships due to tokenization (see the tokenizer sketch below), the lack of true emotional ability, and the similarity to the “leap before you look” aspect of our own subconscious ability to pull words out of our own ass. Imagine if you could only say the first thing that comes to mind, without ever thinking or correcting before letting the words out.
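
    A small sketch of the tokenization point, using OpenAI’s tiktoken library. The exact splits depend on the encoding, so treat the commented output as illustrative:

        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

        word = "strawberry"
        tokens = enc.encode(word)
        # The model never sees individual letters, only opaque token IDs, which is
        # why questions like "how many r's are in strawberry?" are awkward for it.
        print(tokens)                             # a few integer IDs, not ten letters
        print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']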

    I’m curious about what things will look like after solving those first couple problems, but there’s even more to figure out after that.

    Going by recent work I enjoy from Earl K. Miller, we seem to have oscillatory cycles of thought which are directed by waves in a higher-dimensional representational space. This might explain how we predict and react, as well as how we hold a thought to bridge certain concepts together.

    I wonder if this aspect could be properly reconstructed in a model, or approximated by functions built around concepts like the “Tree of Thoughts” paper; a rough sketch of that kind of control loop is below.
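
    A very rough sketch of the kind of search loop I mean, loosely inspired by (not a reproduction of) the Tree of Thoughts idea. The propose and score functions stand in for calls to a language model and are entirely hypothetical:

        import heapq

        def propose(thought, n=3):
            # hypothetical: ask an LLM for n candidate next steps given a partial thought
            return [f"{thought} -> step {i}" for i in range(n)]

        def score(thought):
            # hypothetical: ask an LLM (or a heuristic) how promising a partial thought is
            return -len(thought)  # placeholder scoring: prefer shorter chains

        def tree_of_thought(root, depth=3, beam_width=2):
            # keep only the most promising partial thoughts at each depth (beam search)
            frontier = [root]
            for _ in range(depth):
                candidates = [c for t in frontier for c in propose(t)]
                frontier = heapq.nlargest(beam_width, candidates, key=score)
            return max(frontier, key=score)

        print(tree_of_thought("problem statement"))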

    It’s really interesting comparing organic and artificial methods and abilities to process or create information.


  • this is a difficult one.

    for people (as well as myself) to understand the nuance and complicated nature of communication and interaction. our brains are good at filling in gaps of information in ways that are difficult for us to perceive. there is a complexity and sparsity to interpretations and perspectives which we are largely incapable of realizing, largely due to the excess of knowledge and experience in the world, which can be combined or perceived in countless different ways. we are especially ignorant of what we are ignorant of.

    this means we exist in a high-dimensional battlefield ball of misunderstanding, misinterpretation, and unintended inability to convey what was intended.

    when we say something to someone, we expect they understand what we mean, but their interpretation of the words we use can vary wildly in ways we could not have predicted from our own perspective. we may also fail to realize the existence of things the other party understands or believes, which influence their interpretation of our words in ways we can’t understand and wouldn’t know to discover.

    at the same time, many people are more susceptible than others to statistically ensured trend-setting. this is most popular with bad actors who don’t mind saying whatever they know will “work” instead of trying to convince people of what is true or reasonable.

    TLDR: we are more confident than we should be for almost everything. we also suck at communicating for reasons that are too complex to fully see or interpret. be patient and reasonable, as we are all missing information. a good mediator helps find gaps in perspective. try not to be controlled by your emotion or instinctual reactions to situations. be critical when interpreting new information.


  • That’s largely what these specialists are talking about: people emphasising the existential apocalypse scenarios when there are more pressing matters. I think the intended purpose of a tool should be more of a concern than its training data in many cases. People keep freaking out about LLMs and art models while ignoring the plague of models built specifically to manipulate and predict the subconscious habits and activities of individuals. Models built specifically to recreate the concept of a unique individual and their likeness for financial reasons should also be regulated in new, unique ways. People shouldn’t be able to be bought wholesale; they should be able to sell their likeness as a subscription, with the right to withdraw from future production, etc.

    I think the ways we think about a lot of things have to change based on the type of society we want. I vote for getting away from a system that lets a few own everything until people no longer have the right to live.



  • It’s comparing a bird to a plane, but I still think the process constitutes “learning,” which may sound anthropomorphic to some, but I don’t think we have a more accurate synonym. I think the plane is flying even if the wings aren’t flapping and the plane doesn’t do anything else birds do. I think LLMs, while different, reflect the subconscious aspect of human speech, and reflect the concept of learning from the data more than “copying” the data. It’s not copying and selling content unless you count being prompted into repeating something it was trained on heavily enough for accurate verbatim reconstruction. To me, that’s no more worrying than Disney being able to buy writers who have memorized some of their favorite material and can reconstruct it on demand. If you ask your intern to reproduce something verbatim with the intent of selling it, the problem is the request, not the intern’s memory. I still don’t think the training or “learning” is the issue.

    To accurately address the differences, we probably need new language and ideals for the specific situations that arise in the building of neural nets, but I still consider much of the backlash completely removed from any understanding of what has been done with the “copyrighted material.”

    I tend to view it by thinking about naturally training these machines on real-world content in the future. Should a neural net built to act in the real world be sued if an image of a Coca-Cola can was in the training data somewhere, and some of the machines end up being used to make cans for a competitor?

    How many layers of abstraction, or how much mixture with other training data, do you need before that bit of information is no longer comparable to the crime of someone intentionally and directly creating an identical logo and product to sell?

    Copyright laws already needed an overhaul prior to A.I.

    It’s no coincidence that Warner and Disney are so giant right now, own so much of other people’s ideas, and have the money to control which ideas get funded. How long has Walt Disney been dead? More than half a century. So why does his business own the rights of so many artists who came after?

    I don’t think the copyright system is ready to handle the complexity of artificial minds at any stage, whether it is the pareidolic aspect of retrieving visual concepts of images in diffusion models, or the complex abilities that arise from current-scale LLMs, which, again, I believe are able to resemble the subconscious aspect of word prediction that exists in our minds.

    We can’t even get people to confidently legislate a simple ethical issue like letting people have consensual relationships with the gender of their own choice. I don’t have hope that we can accurately adjust at each stage of development of a technology so complex we don’t even have the language to properly describe its functioning. I just believe that limiting our future and an important technology for such grotesquely misdirected egoism would do far more harm than good.

    The greater focus should be on guaranteeing that technological or creative developments benefit the common people, not just the rich. This should have been the focus for the past half century. People refuse this conceptually because they’ve been convinced that any economic re-balancing is evil when it benefits the poor. Those with the ability to change anything are only incentivized to help themselves.

    But everyone is just mad at the machine because “what if it learned from my property?”

    I think the article even promotes Adobe as the ethical alternative. Congrats, you’ve limited the environment so that only the existing owners of everything can advance. I don’t want to pay Adobe a subscription for the rest of my life for the right to create on par with more wealthy individuals. How is this helping the world or creation of art?


  • This is the thing I kept shouting when diffusion models took off. People are effectively saying “make it illegal for neural nets to learn from anything creative or productive anywhere in any way”

    Because despite the differences in architecture, I think it is parallel.

    If the intent and purpose of the tool were to make copies of the work in a way we would consider theft if done by a human, I would understand.

    There isn’t any legal protection against neural nets learning from personal and abstract information to manipulate, predict, or control the public; in that case it’s the intended function of the tool that should make it illegal.

    But people are too self-focused and ignorant to riot en masse about that one.

    The dialogue should also be about creating a safety net as more and more people lose value in the face of new technology.

    But fuck any of that; what if an A.I. learned from a painting I made ten years ago, like every other artist who may have learned from it? Unforgivable.

    I don’t believe it’s reproducing my art, even if asked to do so, and I don’t think I’m entitled to anything.

    Also, copyright has been fucked for decades. It hasn’t served the people since long before the Mickey Mouse Protection Act.


  • thank you for your response. i appreciate your thoughts, but i still don’t fully agree. sorry for not being succinct in my reply. there is a TLDR.

    1. like i said, i don’t think we’ll get AGI or superintelligence without greater mechanistic interpretability and alignment work. more computational power and RLHF aren’t going to get us all the way there, and the systems we build long before then will help us greatly in this respect. an example would be the use of GPT-4 to interpret GPT-2 neurons. i don’t think they could be described as a black box anyway, assuming you mean GPT LLMs specifically. the issue is understanding some of the higher-dimensional functioning and results, which we can still build a heuristic understanding of. i think a complex AGI would only use this type of linguistic generation for a small part of the overall process. we need parallels for human abilities like multiple trains of thought and real-time multimodal world mapping. once we get the interconnected models, the greater system will have far more interpretable functioning than the results of the different models on their own. i do not currently see a functional threat in interpretability.

    2. i mean, nothing supremely worse than we can do without. i still get more spam calls from actual people, and wide-open online discourse has already had some pretty bad problems without AI. just look at 4chan, i’d attribute trump’s successful election to their sociopathic absurdism. self-verified local groups are still fine. also, go look on youtube for what yannic kilcher did to them alone a year or so ago. i think the biggest thing to worry about is online political dialogue and advertising, which are already extremely problematic and hopeless without severe changes at the top. people won’t care about what fake people on facebook are saying when they are rioting for other reasons already. maybe this can help people learn better logic and critical thought. there should be a primary class in school by now to do statistical analysis and logic in social/economic environments.

    3. why? why would it do this? is this assuming parallels to human emotional responses and evolution-developed systems of hierarchy and want? what are the systems that could even possibly lead to this that aren’t extremely unintelligent? i don’t even think something based on human neurology like a machine learning version of multi-modal engram-styled memory mechanics would lead to this synthetically. also, i don’t see the LLM style waluigi effect as representative of this scenario.

    4. again, i don’t believe in a magically malevolent A.I. despite all of our control during development. i think the environmental threat is much more real and immediate. however, A.I. might help save us.

    5. i mean, op’s issue already existed before A.I., regardless of whether you think it’s the greater threat. otherwise, again, you are assuming malevolent superintelligence, which i don’t believe could accidentally exist in any capacity unless you think we’re getting there through nothing but increased computational power and RLHF.

    TLDR: i do not believe an idiotic superintelligence could destroy the world, and i do not believe a superintelligence would destroy the world without some very specific and intentional emotional emulations. generally, i believe anything that capable would have the analogical comprehension to understand the intention of our requests, and would not have any logical reason to act against it. the bigger concern isn’t the A.I., but who controls it, and how best to use it to save our world.


    1. Why would we be wiped out if they were properly instructed to be symbiotic with our species? This implies absolute failure at mechanistic interpretability and alignment at every stage. I don’t think we’ll succeed in creating an existentially viable intelligence without crossing that hurdle.

    2. Most current problems already happen without A.I.; the machines will get better, and we will not. From spam to vehicles, A.I. will be the solution, not the problem. I do think we should prioritize dealing with the current issues, but I don’t think they are insurmountable by any means.

    3. Why? And why do you think intelligence of that level still couldn’t handle the concept of context? Either it’s capable of analogical thinking, or it isn’t an existential threat to begin with. RLHF doesn’t get us superintelligence.

    4. Again, this assumes we’ve completely failed development, in which case environmental collapse will kill us anyway.

    5. Hey, a real problem. Consolidation of power is already an issue without A.I. It is extremely important that we figure out how to control our own political and corporate leaders. A.I. is just another tool for them to fuck us, but A.I. isn’t the actual problem here.


  • But how else could Disney afford to own everyone else’s rights and properties? Why not think of the little guy! (Mickey Mouse is little, right?)

    That being said, I find it weird that people are going after training data for LLMs after completely ignoring the models built specifically to compete with and take advantage of people’s unconscious habits and lifestyles.

    AI in general will be very important if we are to comfortably survive the near future as a species. Data is an important part of that.

    We absolutely need to do something about the megacorps funneling every new gain we make as a society into increasing the already absurd wealth divide. The technology is good. General web scraping isn’t bad if the tool is not specifically evil in function. We just need, as a global community, to demand that the technology be used to benefit everyone equally as it continues to be developed.


  • Really hate how everyone is flipping their lid about large language model datasets and output when the actual focus should be the models built specifically for use against the general public. It’s like flipping out at someone for reading a letter you posted on a public bulletin board to their robot, while ignoring the person looking through your personal thoughts with a machine built specifically to read your mind.

    Maybe they’ll succeed and hinder progress in AI, so we fail to develop sufficiently advanced AI before our world collapses from environmental failure. At least the advertisers will know how to push our buttons the right way to get us to buy something we don’t need.


  • I think the whole system needs a step back into public use and the public domain. I’ve cursed the Mickey Mouse Protection Act for ages, and limiting use for training at this scale is absurd. I see more harm in the intent of creating a model to collect and organize people’s personal information than in using media to train tools.

    Apart from personal security issues, I think treating training as copyright infringement is absurd unless the model is shown to reliably and unintentionally reproduce a near-exact copy of a work. Intentionally reproducing any work with any tool is infringement regardless of the tool used. Are we going to ban robots from learning in any real-world scenario that contains brands or access to copyrighted content? It’s silly and egotistical.

    If we are worried about existing artists maintaining their careers, that’s a different argument about the economy, one that will be relevant to more and more fields in the near future and one we should already be working to solve. Although we’ll probably just allow the rich to reap all of the benefits of our technology and modern society, and the rest can find a more devalued job, or sell their soul to the rich as a footstool or ethical sellout, as has been the trend of the past fifty years. They can continue using their extra money to suppress opposition and increase advertising again.

    The technology or information used for training isn’t the issue here.