
Monday, May 8

The New Gilded Age of AI, Fuck You Money, and a Post-Capitalist Society

This is a three-part series.

Part Three: Copyright Law's Grey Area, Stochastic Parrots, Neuromorphic Computing Brain-Rape, the Path to Immortality and the Small Pushback to a Techno-Utopian Future from the AI Research and AI Ethics Community

_____________________________

Copyright Law's Grey Area

CopyRIGHT or CopyWRONG?

How can we ensure that artists and creatives get paid for their work, concepts, and ideas? How do we avoid the trap of passing off artists' work as our own? Where does creativity end and stealing begin? Is it a spectrum, and if so, what lies in the middle?

Sarah Andersen: Sarah's Scribbles

Sarah Andersen is a popular cartoonist and illustrator, known for her distinctive drawings that focus on the absurdity of everyday life. Her webcomic Sarah's Scribbles has been collected in multiple books, and she offers art prints, apparel, and jewelry through her online store. Her work has been featured in publications such as The New Yorker, Buzzfeed, and TIME Magazine. In addition to her online comic, Sarah designs online courses for aspiring cartoonists, giving them the tools they need to create their own comics. She has spoken at events around the world and is an advocate for mental health awareness. Her work transcends age and gender, resonating with fans across the globe who find solace in her relatable, humorous cartoons. Her success proves that it's never too late to pursue your passion and achieve your dreams, and she is truly an inspiration for aspiring artists.

Sarah Andersen's Lawsuit against Midjourney, Stability AI (the company behind Stable Diffusion), and DeviantArt

Does making copies of works as training data for generative AI systems infringe copyright? Do AI-generated outputs infringe copyright as derivative works?

Who owns copyright in the outputs of computer programs?

The Copyright Office is being bombarded with applications involving AI-generated works, and some of its policy decisions are set to take center stage in 2024.

GitHub Microsoft Copilot Lawsuit

The lawsuit alleges that Microsoft, GitHub, and OpenAI are violating copyright law. Specifically, it contends that by suggesting code in response to programming prompts, GitHub Copilot (built on OpenAI's Codex model) unlawfully copies existing source code into its output. OpenAI is also being sued over its large language models' alleged piracy of content.


The legal challenge has sparked debate over the boundaries of copyright law in the digital age, and many have argued that a model such as GPT-3 should not be considered unlawful given its potential for innovation and its capacity to generate new ideas. Nevertheless, the lawsuit against GitHub and OpenAI is still ongoing, and if they are found to have violated copyright law, they could be liable for billions of dollars in damages.


The lawsuit not only raises questions about the legality of AI-generated works but also highlights the need to protect developers whose content is used by these large language models. It also underscores the importance of protecting intellectual property and ensuring that creators are properly compensated for their work, even when that work is reproduced through AI.


Moreover, the case has prompted calls for stricter regulations on the use of AI in software development to ensure that developers and artists are adequately compensated for their work. With the case pending, it will be interesting to see how other companies respond and what actions they take to ensure that proper legal standards are met. Ultimately, this case has serious implications for the future of AI software development and should serve as a reminder of the importance of copyright protection and intellectual property rights.

The legal argument in the complaint regarding Copilot is therefore twofold. First, it states that ingesting a copyrighted work from the internet or elsewhere is itself an infringement of copyright law; second, that the outputs produced by Copilot are derivative works that also infringe copyright. To back up this claim, one study showed that roughly one percent of Copilot's outputs matched its training data. This suggests that even though Copilot does not write entire programs, its outputs may still constitute copyright infringement.
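
To make that claim concrete, here is a rough sketch, entirely my own and not the methodology of the study cited above, of how one might estimate a verbatim-match rate: index token n-grams from a training corpus, then flag any generated snippet that contains an indexed n-gram. All inputs here are hypothetical placeholders.

```python
# A rough sketch (my own, not the methodology of the study cited above) of
# estimating how often generated code reproduces training data verbatim:
# index token n-grams from the training corpus, then flag any generated
# snippet that contains an indexed n-gram. All inputs are hypothetical.

def ngrams(tokens, n=20):
    """Yield every contiguous run of n tokens."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def verbatim_match_rate(training_files, generated_snippets, n=20):
    # Collect every n-gram seen anywhere in the training corpus.
    seen = set()
    for source in training_files:
        seen.update(ngrams(source.split(), n))
    # A snippet "matches" if any of its n-grams appears in the corpus.
    matches = sum(
        any(gram in seen for gram in ngrams(snippet.split(), n))
        for snippet in generated_snippets
    )
    return matches / max(len(generated_snippets), 1)

# With a rate of ~0.01, roughly 1 in 100 suggestions would contain a
# 20-token run copied verbatim from some file in the training set.
```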


The injunction sought by the complainants is to shut down these systems and prevent any further use of them until a court has determined the extent of infringement, if any. The outcome of such a case will set an important precedent for the use of artificial intelligence and could have far-reaching implications for copyright law going forward. It is therefore essential that the parties involved in this dispute get a clear ruling from the court as soon as possible so that everyone can understand the legal ramifications of using such systems.


In the meantime, developers should be aware that there is potential copyright infringement risk involved in the use of Artificial Intelligence for generating code or other works and take steps to ensure that their projects are compliant with all applicable laws. This may include seeking advice from legal professionals, ensuring that any training data used by AI systems does not infringe any existing copyrights, and obtaining licenses from the copyright holders for any works that are generated by AI systems. Additionally, developers should ensure that their own code is not infringing on any other parties' copyrights, as this could lead to potential liability down the line.


Ultimately, it will be up to the court system to decide whether AI-generated works are considered to be infringing on existing copyrights. However, developers should still take steps to protect themselves from potential legal risks associated with the use of AI in generating code or other works. By doing so, they can ensure that their projects remain compliant and avoid any unnecessary legal disputes down the line.

Some Progress

We have already seen some progress here. One example is Firefly, Adobe's generative AI product, and Google today announced that it will label AI-generated images with a watermark to indicate whether an image was made by a human or by AI. This approach by Google, and similar approaches by OpenAI and others, do not solve the root of the problem above, though. Adobe, on the other hand, is generating content using its own licensed imagery. The outcomes of these lawsuits will determine who ends up ahead from a business perspective.
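
For a flavor of what labeling could look like at its simplest, here is a sketch of writing and reading a provenance tag in PNG metadata with Pillow. To be clear, this is my own simplified illustration, not Google's actual technique, which embeds an imperceptible watermark in the pixels themselves, partly because metadata tags like this one are trivial to strip. The filenames and tag names are hypothetical.

```python
# A simplified illustration of provenance labeling, NOT Google's actual
# technique (Google's watermark is embedded imperceptibly in the pixels
# themselves, partly because metadata like this is trivial to strip).
# Filenames and the tag name are hypothetical.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path, dst_path, generator="example-model"):
    """Save a copy of a PNG with a provenance tag in its metadata."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical tag name
    meta.add_text("generator", generator)
    image.save(dst_path, pnginfo=meta)

def read_label(path):
    """Return the provenance tag, or None if the image carries none."""
    return Image.open(path).text.get("ai_generated")

# label_as_ai_generated("output.png", "output_labeled.png")
# read_label("output_labeled.png")  # -> "true"
```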

Stability AI vs Getty Images Lawsuit

Getty's case against Stability AI is arguably the strongest of these suits. Getty claims that Stability ingested some 12 million photographs from Getty Images, and not only the photographs but their captions as well.

Getty is willing to license the images in its database as training data, but it objects to Stability's massive infringement and wants to be paid for the imagery Stability is scraping. In some generated outputs you can even see the Getty logo, smushed but still clear enough to identify. That does not bode well for Stability in this case.

Evolution of Copyright in the US

The Copyright Office was once a small agency that hardly anybody paid attention to. Then software came along, copyright was extended to cover it, and ever since it has been one scramble after another. Through some of its decisions, the office is now more or less setting the industrial policy of the United States, and it has little expertise in this area at all.

Authors Guild vs. Google (2016)

The resolution of Authors Guild v. Google in 2016 was an important milestone in the digital age: the Supreme Court declined to hear the Authors Guild's appeal, leaving in place the Second Circuit's ruling that Google's use of copyrighted material for indexing and computational purposes was fair use. The courts acknowledged that there are legitimate reasons to digitize content from library collections, such as serving up snippets in response to search queries, that do not constitute infringement of copyright.


The ruling also helped to clarify the boundaries between fair use and copyright infringement. It established that Google’s uses did not directly exploit the expression in the copyrighted work, but rather only provided limited access to it. This confirmed that the standard of fair use is flexible enough to accommodate digital content providers while still protecting copyright holders from exploitation.


This ruling has been hugely influential in digital-age copyright law because it has given technology companies more legal certainty when using copyrighted material for indexing and computational purposes. It also demonstrates how courts have been able to strike a balance between protecting copyright holders through fair use doctrine and allowing technology companies to innovate within certain parameters. All in all, Authors Guild v. Google laid important precedents for how digital content providers can use copyrighted material for indexing, storage, and other computational uses. As technology continues to advance, the case will remain an important source of guidance in determining what is fair use and what is infringement in the digital age. Ultimately, it is confirmation that copyright law can keep up with technological advancements while still protecting authors' rights.

The problem of copyright infringement arises when a new image is created that is similar to an original copyrighted image: the copyright owner's work is used without permission, and they receive no compensation for it. This has become an increasingly pressing issue in recent years, as artificial intelligence (AI) systems can now create images that look very similar to copyrighted works in their training data, yet copyright owners have no way to opt out and no way to be compensated. This raises two questions: who should be paid for the use of the image, and should the copyright owner be able to opt out at all? Creators and copyright owners are understandably concerned, feeling it is only fair that they be compensated for any images generated using their works. The question then becomes how to ensure that creators and copyright owners receive proper compensation while still allowing the development of AI systems.

It is clear that some sort of system needs to be in place that can protect the rights of creators and copyright owners while still allowing progress in artificial intelligence research. One possible solution is a system where payment is automatically collected whenever an AI system creates an image based on copyrighted works. This would give copyright holders direct compensation for their work, and it would give developers an incentive to create new images rather than lean on copyrighted material, since every such use would cost them a payment to the copyright holder. This could help ensure that creators are fairly compensated while still allowing progress in AI research; a rough sketch of the idea appears below.
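
Here is a deliberately tiny sketch of that idea, with invented names and rates. The hard part, attributing an output to specific training works, is an open research problem and is simply assumed here.

```python
# A hypothetical sketch of the automatic-payment idea: each generation
# attributed to copyrighted training works accrues a per-use royalty to
# each rights holder. The attribution step (deciding which works influenced
# an output) is an open research problem and is simply assumed here; the
# names and the rate are invented.

from collections import defaultdict

PER_USE_ROYALTY = 0.01  # hypothetical flat rate, in dollars

class RoyaltyLedger:
    def __init__(self):
        self.balances = defaultdict(float)

    def record_generation(self, attributed_rights_holders):
        """Credit every rights holder whose work was attributed to an output."""
        for holder in attributed_rights_holders:
            self.balances[holder] += PER_USE_ROYALTY

ledger = RoyaltyLedger()
ledger.record_generation(["artist_a", "artist_b"])  # two works attributed
ledger.record_generation(["artist_a"])
print(dict(ledger.balances))  # {'artist_a': 0.02, 'artist_b': 0.01}
```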


Another possible solution is an opt-out system where copyright owners can choose to exclude their works from being used by AI systems. That way, they have a choice over whether their images and other creative works are used for artificial intelligence research and development, ensuring that their rights are respected. A minimal sketch of the idea follows.
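
The sketch below assumes a hypothetical registry of opted-out sources that is checked before a work enters a training set. Real-world proposals range from "noai" HTML meta tags to machine-readable opt-out lists, but no single standard has been settled; the URLs and field names here are invented.

```python
# A minimal sketch of the opt-out idea, assuming a hypothetical registry of
# sources whose owners have declined training use. URLs and field names
# are invented for illustration.

OPT_OUT_REGISTRY = {
    "https://example.com/artist-a/",
    "https://example.com/artist-b/",
}

def filter_training_set(candidate_works):
    """Keep only works whose source is not covered by an opt-out entry."""
    return [
        work for work in candidate_works
        if not any(work["source_url"].startswith(prefix)
                   for prefix in OPT_OUT_REGISTRY)
    ]

works = [
    {"source_url": "https://example.com/artist-a/cat.png"},
    {"source_url": "https://example.com/artist-c/dog.png"},
]
print(filter_training_set(works))  # only artist-c's work survives
```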


There is no one-size-fits-all solution when it comes to balancing the need for progress in artificial intelligence with the rights of creators and copyright owners.

TESCREAL

A paper written by AI ethicists coined the term "TESCREAL," which is short for:

“Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.”

The paper, co-authored by Timnit Gebru, aims to connect these themes with leaders in the AI community and to find correlations relevant to how many in the tech community think and what drives their ambition. The paper ties these ideologies to eugenics, which on its face can be alarming.

Transhumanism seeks to transcend mortality and enhance physical and cognitive capacities; extropianism seeks to overcome entropy by continually increasing orderliness in our environment and ourselves;

Singularitarianism anticipates and works toward a technological singularity in which machine intelligence surpasses human intelligence; cosmism seeks to transcend the boundaries of physical existence;

Rationalism advocates for a critical approach and structured analysis over intuitive reasoning or emotion;

Effective Altruism promotes intentional action to benefit others in an effective manner. Finally, Longtermism is concerned with protecting the interests of future generations, who are expected to face increasingly complex global challenges due to climate change, population growth, and technological advancement.

These ideologies often overlap and intersect with one another.

Transhumanists may also be Extropians, Singularitarians may be Cosmists, Rationalists can embrace Effective Altruism principles, and so on. This creates opportunities for collaboration and synergy between them that could potentially lead to more effective solutions than any single ideology could achieve on its own.

Critics of the paper say the correlations are too extreme and not specific enough, but I suggest readers take a look at the paper themselves and decide what they find contentious in the academic case that Gebru and her co-author lay out.

Below is a video that Timnit shared on "Eugenics and the Promise of Utopia through Artificial General Intelligence."

The Term Catching On

Recently the term has caught on. Even Marc Andreessen, one of the most influential tech VCs, has used it to describe himself, calling himself a "TESCREAList" in his most recent Twitter bio (though this may be in jest).

For the past few months the bigger media outlets have barely covered this topic, but as AI and the concept of AGI become more of a focus in daily conversation, it is crucial to hear from everyone, especially those who have built and researched the technology extensively. Just in the past week or so, we have seen larger news outlets mention the term.

Timnit Gebru is a prominent AI researcher and advocate for diversity in technology. She worked at Google, where she co-led the Ethical AI team.

Gebru and her team had been working on a research paper exploring the potential dangers of large language models, and Google executives asked for it to be withdrawn. Gebru refused, arguing that the company was trying to stifle her academic freedom. After several email exchanges with higher management, Gebru was let go from Google.

Gebru's departure sparked widespread outrage and debate online. Google CEO Sundar Pichai responded by promising an investigation into her departure.

In the months following Gebru's departure, Google issued a formal apology to her and launched independent investigations into how she was treated.

Gebru has also become a prominent voice in the conversation around the ethical considerations of AI, and she continues to emphasize the importance of diversity and inclusion in tech. She has been an advocate for stronger regulations that would protect consumers from bias and discrimination resulting from algorithms, as well as greater accountability for companies developing AI technologies. She is also critical of how AI companies hire low-wage workers to pre-screen images and other artifacts generated by AI in order to teach the algorithms to produce better, more accurate outcomes.

Gebru's research aims to address some of the most pressing ethical issues facing our increasingly automated world. She has argued that it is both possible and necessary to develop algorithms that make decisions with fairness, accountability, and transparency – rather than allowing these systems to perpetuate existing forms of discrimination or inequality. Through her work, Gebru hopes to inspire further conversations about responsible AI.

Émile P. Torres, the other co-author of this work, is a leading philosopher and historian who has written extensively on existential threats to civilization and humanity. They have argued that humanity needs to be aware of the dangers posed by artificial intelligence, biotechnology, nanotechnology, and other emerging technologies, given their potential to dramatically alter the status quo. They further contend that religious belief systems must be re-examined in light of the rapidly changing technological landscape. Torres' work is highly respected by academics, making them one of the foremost authorities on existential risks to civilization. Their thought-provoking analysis has made a major contribution to contemporary discourse on these topics.

Émile recently wrote an article in Truthdig about Nick Bostrom, a leading philosopher on superintelligence and existential risk (and someone who has expressed some very questionable and racist views); you can read it here.

Stochastic Parrots

The paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, was one of the first to sound the alarm on the potential misuse of large language models (LLMs), the family of systems that now includes GPT-4. It warned that LLMs lack any conceptual understanding and thus can be put to malicious purposes, such as creating fake news or generating offensive content. It also argued that an ethical framework should be put in place to regulate their development and use. The paper memorably described a language model as a:

“system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”
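
To make the metaphor concrete, here is a toy stochastic parrot of my own, not from the paper: a bigram Markov chain that stitches words together purely according to how often they followed one another in its training text, with no reference to meaning. Modern LLMs are vastly more sophisticated, but the paper's point is that the underlying objective, predicting plausible continuations, is fundamentally similar.

```python
# A toy "stochastic parrot" (my illustration, not from the paper): a bigram
# Markov chain that emits words purely according to how often they followed
# one another in its training text, with no reference to meaning. LLMs are
# vastly more sophisticated, but share the next-token prediction objective.

import random
from collections import defaultdict

def train(text):
    """Record which word follows which in the training text."""
    successors = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def parrot(successors, seed, length=10):
    """Stitch together a sequence by sampling observed continuations."""
    out = [seed]
    for _ in range(length):
        choices = successors.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(parrot(model, "the"))  # e.g. "the cat sat on the rug"
```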

Despite being published two years ago, the authors’ warnings still remain relevant today. For instance, GPT-4 — a powerful LLM developed by OpenAI — is now being used for a wide range of tasks, from writing news articles to generating research papers. This has raised a number of ethical questions, such as whether these LLMs should be regulated to ensure that their outputs are accurate and not used for malicious purposes.


In light of this, it’s clear that the authors’ paper was ahead of its time in terms of anticipating the challenges posed by LLMs and the need for ethical guidelines to regulate their use. As we move forward, it’s essential that these ethical issues remain at the forefront of discussion and guide the development and use of LLMs.


The paper, written by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell, proves its relevance in today's digital world as it sheds light on the potential misuse of LLMs. The authors' warnings have proven prescient, and their insights into the need for ethical frameworks remain a key component of understanding the implications of language models today. Overall, the paper is an important reminder that with great power comes great responsibility, and that this responsibility should be taken seriously by those developing and using LLMs.

Neuromorphic Computing, Brain-Rape and the Path to Immortality

Neuralink

Our brain is an incredibly complex organ. It contains 86 billion neurons, connected by 3 million kilometers of nerve fibers, that together form an extraordinarily powerful network. This network enables us to think, reason, remember, and feel emotions. Our neurons are constantly communicating with each other through electrical signals, forming the basis for how we learn and experience life.

With so many neurons and pathways, the possibilities of what our brains can do are almost endless. From creating ideas and solving puzzles to experiencing joy and suffering, our brains are responsible for virtually every aspect of our lives. Understanding how this network functions is the key to unlocking its full potential. By studying the anatomy of the brain and developing an understanding of its processes, we can begin to uncover the secrets behind its power and use them to improve our lives in countless ways. With each new discovery, we take a small step closer to understanding the organ that makes us who we are.

With the advent of brain interfaces such as Neuralink, we are entering a new era in which our thoughts can be read and understood by machines. Such interfaces work by connecting a physical device to the brain via electrodes, allowing for direct communication between neurons and technology. This opens up exciting possibilities for medical applications, such as restoring movement or vision to those with spinal or brain injuries, as well as enabling the direct integration of artificial intelligence into our daily lives.
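
For a flavor of the signal processing such devices depend on, here is a minimal sketch, my own illustration rather than Neuralink's actual pipeline, of threshold-based spike detection on a simulated electrode trace: samples are flagged where the voltage crosses a multiple of the estimated noise level (the median-absolute-deviation estimator used here is standard in spike sorting).

```python
# A minimal illustration (my own, not Neuralink's pipeline) of the signal
# processing brain interfaces rely on: detecting neuron "spikes" in an
# electrode's voltage trace by flagging samples that cross a threshold set
# at a multiple of the estimated noise level. The trace is simulated.

import numpy as np

rng = np.random.default_rng(0)

fs = 10_000                                # 10 kHz sampling rate
trace = rng.normal(0.0, 1.0, fs)           # 1 second of baseline noise
for t in (1_000, 4_000, 7_500):            # hypothetical spike times
    trace[t:t + 10] += 8.0                 # crude injected spike shape

def detect_spikes(signal, k=5.0):
    """Return sample indices where the signal first crosses k * noise."""
    # Median absolute deviation: a robust noise estimate used in spike sorting.
    noise = np.median(np.abs(signal)) / 0.6745
    above = signal > k * noise
    # Keep only the first sample of each threshold crossing.
    return np.flatnonzero(above & ~np.roll(above, 1))

print(detect_spikes(trace))  # -> approximately [1000 4000 7500]
```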

However, there are also ethical concerns associated with such technologies. For example, would it be possible for someone to use a brain interface to steal thoughts and ideas from another person? Could AI systems gain access to sensitive information about an individual that should remain private? These are questions that need to be addressed in order for society to reap the full benefits of these new technologies without compromising our privacy and security.

On one hand, having powerful tech leaders at the forefront of this type of cutting-edge technology can help accelerate development and ensure ethical standards are upheld. On the other hand, many people may feel uncomfortable ceding control to a few influential individuals. There must also be regulation to ensure that such technology is not abused or used for malicious purposes.

Ultimately, it is important for us to keep an open dialogue about the implications of Neuralink and other brain-hacking interfaces. We must acknowledge the potential benefits, but also weigh the possible risks and ethical considerations before allowing this powerful technology to become widespread.

When Do Animal Rights Meet Human Rights?

The debate between animal rights and human rights is a long-standing one. How much should an animal's rights be respected in comparison to the rights of humans? The question frequently arises around the use of animals in research, particularly with regard to artificial intelligence (AI). Animal testing is often used in the development of AI, raising questions about whether such testing is ethical.

The development of Neuralink technology has added another layer to this debate. Neuralink is a device that could potentially be implanted into animals - including monkeys - in order to measure brain activity and further develop AI. This brings with it the same kind of moral dilemmas as those surrounding animal testing.

In order to move forward with any kind of technology, be it AI or Neuralink, it is important to consider both human and animal rights. We must ask ourselves whether the potential benefits of development outweigh the suffering that would occur in testing phases, and ultimately, whether animal experimentation should ever be considered acceptable.