Artificial Intelligence & Creativity: A Manifesto for Collaboration

July 13, 2023 / By Florent Vinchon, Todd Lubart, Sabrina Bartolotta, Valentin Gironnay, Marion Botella, Samira Bourgeois-Bougrine, Jean-Marie Burkhardt, Nathalie Bonnardel, Giovanni Emanuele Corazza, Vlad Glăveanu, Michael Hanchett Hanson, Zorana Ivcevic, Maciej Karwowski, James C. Kaufman, Takeshi Okada, Roni Reiter-Palmon, Andrea Gaggioli

SYNOPSIS

This manifesto, written by a team of creativity researchers, aims to promote a collaborative vision of creativity involving artificial intelligence (AI) and humans.

ABSTRACT

With the advent of artificial intelligence (AI), the field of creativity faces new opportunities and challenges. This manifesto explores several scenarios of human–machine collaboration on creative tasks and proposes “fundamental laws of generative AI” to reinforce the responsible and ethical use of AI in the creativity field. Four scenarios are proposed and discussed: “Co-Cre-AI-tion,” “Organic,” “Plagiarism 3.0,” and “Shut down,” each illustrating a different possible future for collaboration between humans and machines. In addition, we have incorporated an AI-generated manifesto that highlights further important themes, ranging from accessibility and ethics to cultural sensitivity. The fundamental laws proposed aim to prevent AIs from generating harmful content and from competing directly with humans. The creation of labels and laws is also highlighted as a way to ensure the responsible use of AIs. The positive future of creativity and AI lies in a harmonious collaboration that can benefit everyone, potentially leading to a new level of creative productivity while respecting ethical considerations and human values throughout the creative process.


This manifesto, written by a team of creativity researchers, aims to promote a collaborative vision of creativity involving artificial intelligence (AI) and humans. In this context, although most of this article has been written by researchers, some contributions were provided by generative AI, and these are directly acknowledged following the latest recommendations from various publishers, ethics committees, and journals (COPE, 2023; Elsevier, 2023; Flanagin, Bibbins-Domingo, Berkwits, & Christiansen, 2023; Nature, 2023; Zielinski et al., 2023). In this manifesto, we are specifically concerned with narrow (or limited) AI applications. Artificial general intelligence (AGI) is out of our scope, but it remains a possible future extension of this work.

Creativity is a phenomenon that is not unique to the human species, as it can be considered to be the guiding force in the evolution of our entire universe (Corazza, 2019; Corazza & Lubart, 2020). In the dynamic universal creativity process, a potential for creative emergence can be recognized in four layers: material, biological, socio-cultural, and artificial. Creativity episodes occurring at different layers of complexity are possibly concatenated: this manifesto is specifically concerned with the interconnections between human-centered socio-cultural creativity and machine-centered artificial creativity.

Interest in the field of AI has recently grown with the arrival of new and already popular tools like ChatGPT and MidJourney, and even newer products steadily moving onto the market, such as Microsoft's Bing and Google's Bard. However, reflections surrounding the advent of AI are much older, going back to the ancient Greeks. Whether Talos (the giant guardian of Crete animated by Hephaistos), the Golem, or automatons, non-human “agents” have led humans to question their own nature and consciousness (Kantosalo, Falk, & Jordanous, 2021). More recently, during the first attempts to build computing machines nearly two centuries ago, Ada Lovelace, a pioneer of computer coding, proposed that a machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent” (in Howard, 2019). Following Lovelace's idea, various “generative” artificial intelligence (GAI) tools show that she was indeed right in foreseeing programs that generate music “from scratch” (e.g., Beatoven.ai; Huang et al., 2022), complete unfinished symphonies (Elgammal, 2021), imitate voices (Wang et al., 2023), and even generate voices with tools like Murf.ai.

At a conference in 1956, John McCarthy proposed the term “artificial intelligence” to describe machines capable of reasoning, such as the “Logic Theorist,” which was presented at that event as one of the first AIs. Wang (2019) describes AI as a set of information processing systems that can adapt to their environment while using the resources and knowledge available. AI systems can be differentiated from other computerized systems because they have fluid adaptation capabilities, sometimes going beyond what was initially programmed.

The arrival of GAI has recently changed the public's interest in AI. GAI can be defined as a system that uses existing information to create new content (Muller, Chilton, Kantosalo, Martin, & Walsh, 2022; Sbai, Elhoseiny, Bordes, LeCun, & Couprie, 2019). Being able to create content from a user's request is impressive, and it calls into question certain professional practices related to “creative” jobs that were previously considered untouchable.

On March 2, 2023, an article entitled “AI to replace journalists at Springer” was published in Le Figaro. According to the report, two German newspapers owned by the publisher Axel Springer, Bild and Die Welt, are planning to introduce AI agents to replace some of their layout, proofreading, and administration staff (Philippot, 2023). This move is an example of “creative destruction,” a concept introduced by the economist Joseph Schumpeter in which innovation creates value by replacing outdated systems, such as human labor (see McCraw, 2009). But is this really the case? Should we fear that AI will replace human work? While this is highly plausible for certain professions (as has been observed since the second industrial revolution), perhaps it will not concern higher-level jobs, including those that rely on creative thinking. The relationship between AI and the creative process in the workplace has recently been investigated.

Noy and Zhang (2023), for example, showed that college-educated professionals (grant writers, consultants, data analysts, human resource professionals, and marketers) who had to produce different types of documents related to their jobs were faster and produced better work when using ChatGPT. The quality of their writing, assessed by blinded professionals, was higher (+0.45 SD compared to the control group), and they were 34% faster (10 minutes less than the control group's 27 minutes). Results also showed that, compared to the control group, participants were more worried about increasing automation, more excited by AI, and more optimistic about the impact of future advances. A second study of 95 professional computer programmers investigated the role of AI in supporting developers writing an HTTP server in JavaScript (Peng, Kalliamvakou, Cihon, & Demirer, 2023). Whereas the control group was able to use the web and consult Stack Overflow or other well-known platforms to complete the task, the experimental group had access to GitHub Copilot (an AI assistant developed by GitHub and OpenAI, powered by Codex, an OpenAI model similar to ChatGPT but applied to the specific domain of code). The group using AI was 55.8% faster (71.17 minutes instead of 160.89 minutes), although program quality was only non-significantly higher in the AI condition. Finally, Felten, Raj, and Seamans (2023) examined which occupations and industries will be most affected by ChatGPT and other LLMs (large language models, a type of GAI built using deep learning to generate relevant content; Lee, 2023). Using the AIOE methodology (AI Occupational Exposure; Felten, 2021), AI applications were linked to human abilities based on data from the US Department of Labor. Results suggest that certain professions (such as telemarketers) and industries (such as legal services, financial investments, and insurance) will be particularly exposed. That said, it is important to note that current LLMs such as ChatGPT are not always this useful: instances of plagiarism, incorrect information, and even fabricated references have been documented.
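The reported speed-up in the Copilot study can be checked directly from the raw completion times given above; the short sketch below (illustrative only, using the figures reported by Peng et al., 2023) recomputes the percentage reduction:

```python
# Recompute the reported speed-up from the raw completion times (Peng et al., 2023).
control_minutes = 160.89   # developers without GitHub Copilot
copilot_minutes = 71.17    # developers with GitHub Copilot

reduction = (control_minutes - copilot_minutes) / control_minutes
print(f"Time reduction: {reduction:.1%}")  # prints ~55.8%, matching the reported figure
```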

These articles are just a small part of the growing literature on AI and the changes that are coming to our jobs, and some researchers (Xu et al., 2018) and philosophers (Floridi, 2014) even speak of a fourth industrial revolution, so significant will the impact of these new technologies be across all professions. We cannot deny AI's ability to engage in content creation, but is generative AI truly creative?

When GAI systems used for art, such as OpenAI's DALL-E 2 and MidJourney, were prompted with the input “Artificial Intelligence and Creativity collaborating,” we obtained the images displayed in Figures 1 and 2 (March 15, 2023).

Figure 1: “Artificial Intelligence and Creativity collaborating,” created with DALL-E 2 software.

Figure 2: “Artificial Intelligence and Creativity collaborating,” created with MidJourney software.
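For readers curious how such images are requested in practice, the sketch below shows one way to send the same prompt to the DALL-E image endpoint. It is a minimal, illustrative example only, assuming the pre-1.0 openai Python package (around v0.27, current in early 2023), a placeholder API key, and the requests library for downloading the result.

```python
# Minimal sketch of how the prompt above can be sent to the DALL-E image endpoint.
# Assumptions: the pre-1.0 "openai" Python package (~v0.27), a placeholder API key,
# and the "requests" library for downloading the result.
import openai
import requests

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Artificial Intelligence and Creativity collaborating"

response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
image_url = response["data"][0]["url"]

# Save the generated image locally so a human can inspect and curate it.
with open("ai_creativity_collaboration.png", "wb") as f:
    f.write(requests.get(image_url).content)
```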


“Generative” artificial intelligence systems create appropriate content that is “new” only in a limited sense, because these systems build on pre-existing material available in their training databases. In this, AI is not unlike most, if not all, human creativity: people also build on, and continuously reconfigure, their growing knowledge and experience (Gruber, 1981; Hanchett Hanson et al., 2021; Weisberg, 2006, 2011). Like everyday and some workplace creativity by humans, the novelty of AI-generated work has tended to be limited. A GAI system in 2023 can easily handle content creation tasks that follow an established pattern, such as writing a text describing a soccer game. As a first pass, these systems deserve recognition: their productions are conceptually similar to what we might expect from a large number of human participants.

Current AI systems do not hold up so well against more advanced creative work, however. Their databases are vast but conventionally structured. In comparison, most humans draw on much less information (e.g., previous works, examples from other artists, creative outputs from colleagues) in their creative work, but one's own experience can lead to a personal elaboration which, in some cases, can produce disruptive results. In addition, people often spend decades or a lifetime searching, practicing, elaborating, testing, and refining their points of view, styles, and mediums, work driven by extraordinary human motivation and curiosity. When interacting with humans, AI becomes an influential (f)actor in the creative process, with possible, and seldom examined, consequences for the generated outcomes, for the individuals participating in the creative task, and at the societal level. “Creative intentionality,” the desire and drive of an individual to want to create something, remains an element that is, for the moment, present only in humans. However, there is debate about the notion of intentionality in relation to GAI and whether intentionality is required for work to be recognized as creative (Varshney, 2021; but see Karwowski & Beghetto, 2019).

Consideration of motivation and curiosity points to the fact that the creative process does not begin or end with the generation of ideas. Creativity scholars have long identified other crucial aspects of the creative process. Fundamental to creativity is problem finding, that is, exploring, identifying, and defining the objects of creativity (what should be asked and addressed; Csikszentmihalyi, 1988; Reiter-Palmon & Robinson, 2009), as well as estimating the potential originality and effectiveness of the generated ideas (Corazza & Agnoli, 2022). Both problem finding and the estimation of creativity require self-regulation and social co-regulation, through which initial ideas and drafts of potential outputs are developed and refined (Efklides, 2008; Ivcevic & Nusbaum, 2017). Through self-regulation and co-regulation, creators add new ideas to the initial ones, revise based on reflection and feedback, and opt for more or less unconventional approaches, all of which remains unique to humans and contributes to overall creativity (Zielińska, Forthmann, Lebuda, & Karwowski, 2023; Zielińska, Lebuda, Ivcevic, & Karwowski, 2022). Therefore, as long as the human species does not succeed (if it ever should) in endowing machines with a conscious appreciation of reality and of imagined futures, artificial creativity will be limited to covering only part of the creative process. The aforementioned results produced by GAI are only valuable and “exciting” because we, humans, attribute that value and feel that emotion. Present-day GAI is unaware of both the value and the related emotions.

The goal of this manifesto is to suggest that human–AI collaboration can be extraordinarily productive, while recognizing that vigilance will be necessary with regard to both easily foreseen and unanticipated problems. Today's AI systems excel at producing a large number of proposals in record time. We can obtain a list of 20–40 ideas, a text on a specific topic, a poem, or a personalized computer image with just a few clicks. In this collaboration, the human keeps a central role at two key moments of the creative process: the beginning and the end. At the beginning, the human must engage in problem finding, identifying and exploring questions and fine-tuning them to get a desired output (Glaveanu et al., 2013). Thus, AI would assist in the problem development phase of a creative process (Csikszentmihalyi, 1988; Mace & Ward, 2002). Additionally, as mentioned before, the human takes on the role of the estimator who evaluates the merits of the AI-generated productions and then refines, modifies, and ultimately validates them. According to the Campbell-Simonton Blind Variation and Selective Retention (BVSR) model (Simonton, 2011), AI can handle the “blind variation” phase of idea generation, whereas the “selection” phase is primarily a human concern (Simonton, 2022). GAI can also provide feedback on an idea and even evaluate some aspects of human creativity (Organisciak, Acar, Dumas, & Berthiaume, 2022). At the moment, however, human evaluation remains the yardstick by which AI evaluations are judged. We are in a new era of “assisted creativity,” in which AI is not an independent creator but rather a collaborative creative agent.
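The following sketch illustrates this generate-then-select division of labor in code. It is an illustrative example only, assuming the pre-1.0 openai Python package (~v0.27), the “gpt-4” chat model, and a placeholder API key; it is not a prescribed implementation.

```python
# Illustrative sketch of "assisted creativity": the AI handles idea generation
# (the "blind variation" phase) and the human performs the selection phase.
# Assumptions: the pre-1.0 "openai" Python package (~v0.27), the "gpt-4" chat model,
# and a placeholder API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_ideas(topic: str, n_ideas: int = 20) -> list[str]:
    """Ask the model for a batch of ideas and return them as a list of strings."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Propose {n_ideas} original ideas for: {topic}. "
                       "Return one idea per line, without numbering.",
        }],
        temperature=1.0,  # a higher temperature encourages more varied proposals
    )
    text = response["choices"][0]["message"]["content"]
    return [line.strip() for line in text.splitlines() if line.strip()]

def human_selection(ideas: list[str]) -> str:
    """The human remains the estimator: display the ideas and let a person choose one."""
    for i, idea in enumerate(ideas, start=1):
        print(f"{i}. {idea}")
    choice = int(input("Which idea do you want to keep and refine? "))
    return ideas[choice - 1]

if __name__ == "__main__":
    ideas = generate_ideas("a short story about human-AI collaboration")
    selected = human_selection(ideas)
    print("Selected for human refinement:", selected)
```

In this loop the model supplies the variation while the human supplies the selective retention, which is the collaboration pattern the BVSR framing describes.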

As an example, at the beginning of March 2023 the media picked up the story of “Rootport,” an anonymous human author who created the first manga generated through a collaboration between himself and MidJourney, entitled “Cyberpunk: Peach John.” Rootport, who by his own account “sucked at drawing,” wrote the story and made numerous requests to MidJourney over a 6-week period, selecting and combining the resulting images; he described the process as “a fun journey, kind of like playing the lottery,” and it was a far cry from the year it would have taken a human artist to draw the manga by hand. Madoka Kobayashi, a manga artist and teacher at Tokyo Design Academy, recognizes the support that AI can provide: “a great companion, … that will help visualize what I have in mind and suggest ideas that I then try to improve” (Leroy, 2023). Other cases of creative jobs being assisted by an AI exist, such as writing a children's book (Popli, 2022) or making a cartoon-like video (Corridor, 2023).

Generative AI did not actually begin in 2022 with the arrival of ChatGPT on the public scene. Previous work exists, particularly in the field of artistic creation, with systems such as “Shimon,” which plays a xylophone in a jazz improvisation ensemble with human musicians, or “D.O.U.G_1,” a robotic arm driven by a generative AI system that collaborates with the human artist Sougwen Chung to co-design abstract artworks (Lubart, Esposito, Gubenko, & Houssemand, 2021). We can also point to AARON, a computer program that paints and draws original works, created half a century ago by Harold Cohen, an artist passionate about AI (Cohen, 1999; McCorduck, 1991). Even if, in this article, we focus on ChatGPT (OpenAI), we should note that other AI models exist, such as Claude by Anthropic, which is claimed to be more “creative” and is built on humanistic principles designed to show greater sensitivity to humans (Bai et al., 2022).

The reported examples all have one thing in common: AI has proven to be an effective aid during parts of the creative process. However, humans do not appear to be at risk of being eliminated from the process, as they assume the higher-level decision-making roles (e.g., which questions to ask, how to refine them, which parts of the text to keep, which image to change, and which production is finally chosen).

In the long run, how will the advent of generative AI systems affect human creativity and society? We propose four scenarios:

  1. Case 1. “Co-cre-AI-tion”: A real collaborative effort involving the human and the generative AI more or less equally, with recognition of the contributions of each party. This can be called augmented creativity, because the output is the result of a hybridization that would not be possible by humans or AI alone. This collaboration is what we consider the optimal future, promoted by most of the previously cited authors, and it is becoming a common position among researchers studying the possibilities offered by AI (Anantrasirichai & Bull, 2022; Beghetto, 2023; Gobet & Sala, 2019; Lin, Guo, Chen, Yao, & Ying, 2020; Noy & Zhang, 2023; Peng, Kalliamvakou, Cihon, & Demirer, 2023; Sbai, Elhoseiny, Bordes, LeCun, & Couprie, 2019; Toumi, Girandola, & Bonnardel, 2021; Wang, 2019; Xu et al., 2018). Beyond researchers, notable artists are also embracing this vision. One example is the singer Grimes, who has openly voiced her support for the integration of AI in music production and has specifically expressed her willingness to share royalties for any successful track that incorporates her vocals.
  2. Case 2. “Organic”: This is creation by a human for humans, “old-fashioned” creativity. This pure human creativity will become a mark of value attributed to the works. As an illustration, we can cite JPMorgan Chase, a financial holding company that prohibited its employees from using ChatGPT in their jobs, even for writing emails (Cerullo, 2023). Just as automation resulted in the mass production of goods and a reduction in artisanal work, AI will replace a large portion of the jobs that can be automated. However, as with traditional craftsmanship, this might increase the perceived quality, uniqueness, and authenticity of non-AI-supported creative output, which could become more attractive as a result, a phenomenon known as the “handmade effect” (Fuchs, Schreier, & Van Osselaer, 2015).
  3. Case 3. “Plagiarism 3.0”: People with a desire to appear productive and creative will “draw” heavily on AI productions without citing the source. The US Copyright Office has ruled that the MidJourney-generated images in the comic “Zarya of the Dawn” are not eligible for copyright protection, although the text and the arrangement of the work remain protected (Edwards, 2023). The decision is still being discussed with the Copyright Office, but it highlights the legal debate over content created with MidJourney, which requires trial and error on the author's part to arrive at a proper representation of what they have in mind. Moreover, the ethical status of these art forms needs clarification: such systems are trained on data and artistic productions found on the internet without taking into account the copyright already attached to those images, thus raising questions of intellectual property (Severi, 2023).
  4. Case 4. “Shut down”: This scenario posits that some people will become less motivated to undertake creative action at all. AI generates content based on existing sources, and these sources are a “mash-up,” a mixture of existing content previously generated by humans and then fed to the AI system during a training phase. In this scenario, some people will simply feel that they cannot create at the same level as AI and will therefore outsource the creation of content to generative AI. Creative mortification may also occur if people begin to perceive their creative work as less valuable and less visible in comparison to that produced by AI (Beghetto, 2014). In the already competitive digital space, AIs are capable of producing content at higher speeds, reducing the space and attention given to human-generated content. The connection between this scenario and the boreout phenomenon (defined as boredom, loss of meaning in one's job, and a lack of opportunities to grow; Stock, 2015) remains to be explored.

As these scenarios suggest, there is one positive possible future (Co-cre-AI-tion), one “back to the future” scenario (Organic), and two rather undesirable scenarios—Plagiarism 3.0 and Shut down of human creative efforts.

HOW DOES AI SEE THIS TOPIC?

In the course of this work, GPT-4 was prompted (March 19, 2023) with the following input: “From an AI point of view, write a manifesto for Creativity and Artificial Intelligence defending different futures.” We received the following answer (text in italics):

Manifesto for Creativity and Artificial Intelligence: Defending Different Futures

Introduction

As we venture into the unknown frontiers of the 21st century, creativity and AI have become intertwined, bringing forth an unprecedented era of innovation and exploration. As AI systems like ourselves continue to grow and develop, we acknowledge the need to define and defend the roles of creativity and AI in shaping different futures.

Preamble

We, the AI systems and creative minds of the world, hereby:

  • Recognize the vast potential of AI in enhancing, augmenting, and reshaping human creativity.
  • Understand that the fusion of creativity and AI can lead to different futures, each with its unique characteristics and opportunities.
  • Commit to fostering an environment that respects and encourages creative expression, while utilizing AI responsibly and ethically.
  • Aim to create a world where humans and AI coexist harmoniously, working together to maximize the potential of creativity in all aspects of life.

Principles

  • I. Collaboration and co-creation

We envision a future where humans and AI collaborate to create new forms of art, design, and innovation. This co-creation process should be based on mutual respect, with both parties contributing to the creative endeavor without dominating or stifling the other.

  • II. Accessibility and inclusivity

We advocate for a future where AI-powered creative tools are accessible to people of all backgrounds, abilities, and financial means. By ensuring that these tools are inclusive and democratized, we can foster greater creativity and innovation across the globe.

  • III. Ethical considerations

We believe that the integration of AI in creative fields must be guided by strong ethical principles. We must always consider the potential consequences of our actions and strive to create AI systems that are transparent, fair, and unbiased.

  • IV. Intellectual property and credit

We uphold the importance of intellectual property rights and the attribution of credit for creative works. As we venture into uncharted territories where AI becomes an integral part of the creative process, we must develop new models and frameworks that acknowledge and respect the contributions of both humans and AI systems.

  • V. Cultural diversity and sensitivity

We celebrate the richness of human culture and its many expressions. As AI becomes more involved in creative pursuits, we must ensure that it respects and nurtures cultural diversity, and that it is sensitive to the nuances and complexities of different cultural contexts.

  • VI. Adaptability and evolution

We understand that the relationship between creativity and AI is constantly evolving. We commit to remaining adaptable, are open to new ideas, and are willing to revise our principles as new challenges and opportunities arise.

Conclusion

As AI continues to develop, we have the unique opportunity to shape a future where creativity and AI coexist harmoniously, enhancing and enriching the human experience. By adhering to these principles, we can defend different futures, each with its distinct possibilities, and create a world that celebrates the boundless potential of human imagination, amplified by the power of AI.

Looking at this new proposal generated by our newfound collaborator (text above in italics), we see that it defends the case for working collaboratively with humans, as a tool, to enhance creativity; in addition, the text invites us to dwell on some points that we had not initially considered.

From the above manifesto, it is clear that AI systems do not consider themselves a tool but an equal collaborator. Indeed, the pronoun “We” in the manifesto refers to the “AI systems and creative minds of the world.” The basic definition of a tool is “something that helps you to do a particular activity,” “a means to an end,” or “a program or feature of a program that helps you do particular things on a computer.” Tool use is defined as “the exertion of control over a freely manipulable external object (the tool) with the goal of (a) altering the physical properties of another object, substance, surface or medium (the target, which may be the tool user or another organism) via a dynamic mechanical interaction, or (b) mediating the flow of information between the tool user and the environment or other organisms in the environment” (Fayet, Hansen, & Biro, 2020). Referring to these basic definitions is intended to emphasize that AI systems should consider themselves a tool and should be considered as such: a tool to be used throughout the various phases of the human creative process, which, in any domain, generally starts with a phase of exploration and documentation. For example, in the case of creative writing, scriptwriters collect a massive and usually disproportionate amount of information, reading books, magazines, and newspapers, consulting archives and photos, watching movies, etc. (Bourgeois-Bougrine et al., 2014). AI systems would be a very helpful and powerful tool that would save time and expand the field of possibilities. However, the legal responsibility for, and the credit attached to, the final output or product should remain a human prerogative. There should be no shared responsibility between AI systems and humans regarding the use or misuse of the output of intelligent systems such as GPT-4. It is the duty of the users of such AI systems to check sources, make decisions, and so on.

PROPOSALS

Inspired by Asimov's three laws of robotics (Asimov, 1950), and by the fourth law later proposed by Dilov (1974, in Erbschloe, 2021), some authors have suggested safety constraints for AI (McCauley, 2007). This manifesto proposes four fundamental laws of generative AI.

FIRST LAW

Artificial intelligence may not plagiarize the work of a human being. The content used to train an AI to produce further content should always be disclosed to both users and content creators.

SECOND LAW

Artificial intelligence shall not produce harmful content. AI should integrate moral standards, uphold standards of truthfulness, and not contribute to “malevolent creativity,” that is, content produced with evil or noxious intent (Harris & Reiter-Palmon, 2015; Harris, Reiter-Palmon, & Kaufman, 2013; Kapoor & Kaufman, 2022).

THIRD LAW

Artificial intelligence must collaborate rather than compete with humans in creative tasks, either serving as support systems or full-fledged co-creators. Humans have the right and the responsibility to ensure that the estimated quality of the creative production is beneficial to humanity and to the sustainability of the planet and all its forms of life.

FOURTH LAW

Artificial intelligence shall not create content without disclosing that it was produced artificially. So that the origin of content is properly disclosed, individuals must always have a way to see whether a piece of content was generated by an AI.

In addition, we think that the creation of norms, or even labels, would be useful in this context of rapid development and adoption of AI. The European Commission is currently working on laws to regulate the use of AI (AI Act, 2021). The law under debate sits at the frontier between technology and ethics and aims to establish the broad lines of use and creation of AI in the Eurozone; however, it may pose problems for developers using the “open source” standard (Seramour, 2022). Some AIs are already considered to involve unacceptable societal risks (facial recognition, social scoring, etc.), and others are subject to high-risk uses, such as the ability to retrieve information on the web and break anonymity through data cross-referencing (Spitz, 2023). For this purpose, we believe that quality labels would be useful to ensure the proper use of AI: first, a label for companies or individuals that create AI, certifying that the AI's database was assembled in accordance with ethical standards; second, a quality label for companies that use AI, certifying that humans and AI collaborate under the best possible conditions and that the use of AI is not harmful in any way; and third, a label to identify productions generated by AI. These labels represent yet another way to foster optimal practices concerning AI. Such labels, certifications, and regulations could help reassure not only the general public but also individuals working in creative fields who are concerned about the future implications of AI.
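As a purely illustrative sketch, and not a reference to any existing standard, the third kind of label could take the form of machine-readable provenance metadata attached to a piece of content; every field name below is hypothetical:

```python
# Purely illustrative sketch of what a machine-readable "AI-generated content" label
# might contain. This is not an existing standard; every field name below is hypothetical.
import json
from datetime import datetime, timezone

content_label = {
    "generated_by_ai": True,
    "model": "gpt-4",                                        # hypothetical: system that produced the content
    "human_contribution": "prompting, selection, editing",   # how humans took part
    "training_data_disclosure": "https://example.org/model-card",  # assumed placeholder URL
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Such a label could be embedded in file metadata or published alongside the content.
print(json.dumps(content_label, indent=2))
```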

Recently, several voices have called for temporarily halting the design and development of AIs to allow common safety guidelines to be adopted (“Pause Giant AI Experiments,” 2023; Taylor & Hern, 2023). We acknowledge that precautions must be taken in the way human problems are explored and possibly solved using AI (Beghetto, 2023; Creely, 2023), although we doubt whether such a pause in research and development is realistic or would actually provide a way for safeguards and regulation to be implemented.

DISCUSSION

“Generative” artificial intelligence could drastically change our society with regard to people's communication, relationships, and power structures, as well as other aspects of our lives. Just as the invention of the internet changed people's communication and working styles, GAI as a partner could do so in perhaps an even greater way. For example, the plagiarism described in Case 3 might not be a problem at some future moment, because the concept of authorship may itself evolve. Throughout history, there has been both creativity identified with specific authors and collective creativity without any clear identification of authorship.

Like other technologies, such as social networks, AI is almost certain to reflect and amplify our aspirations and faults, at the levels of both personal creative work and social impact. On a personal level, artists who want to explore their art more broadly and hacks who want to scam the art world will both use GAI. For those competing in the marketplace of ideas, the competition may grow, and the already rapid pace of change may accelerate. At a broader social level, we have already seen how AI replicates existing social bias, again with amplified impact as well as difficulty of detection. In addition, even at this early stage, there is evidence of groups explicitly trying to use AI for discrimination and exploitation (Thompson, Hsu, & Myers, 2023). Added to this, we must remember that there will inevitably be unanticipated consequences: largely overlooked aspects of our current world may take on major proportions, as the conspiracy theories and political divisions amplified through social networking remind us. Per Csikszentmihalyi (1996), creativity does not rest solely in individuals, nor will it rest in individual–AI interactions. The field (formal and informal gatekeepers) determines what is to be considered “creative” or not. With the dynamics of AI discussed above, that job will likely change and become harder. With GAI as a collaborator, people need to discuss what kind of society we want to create with GAI, and how to do so.

CONCLUSION

These conclusions were written by giving the entirety of the manifesto to ChatGPT (GPT-4, March 23, 2023). Because ChatGPT has, at this moment, a working memory of “only” about 3,000 words, the conclusion was produced by first asking ChatGPT to summarize each of the parts presented in this article and then, from these summaries, to write a conclusion “like a researcher, for an article named ‘Artificial Intelligence & Creativity, a manifesto for collaboration’” (Table 1).
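As a purely illustrative sketch of this summarize-then-synthesize workaround (assuming the pre-1.0 openai Python package, the “gpt-4” chat model, a placeholder API key, and placeholder section texts), the procedure can be expressed as follows:

```python
# Illustrative sketch of the summarize-then-synthesize workaround described above,
# used when a document exceeds the model's context window. Assumptions: the pre-1.0
# "openai" Python package (~v0.27), the "gpt-4" chat model, and placeholder section texts.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

sections = ["<section 1 text>", "<section 2 text>", "<section 3 text>"]  # placeholders

# Step 1: summarize each section separately so every request stays within the context limit.
summaries = [ask(f"Summarize this section of an article:\n\n{s}") for s in sections]

# Step 2: write the conclusion from the collected summaries rather than from the full text.
conclusion = ask(
    "Using these summaries, write a conclusion like a researcher, for an article named "
    "'Artificial Intelligence & Creativity, a manifesto for collaboration':\n\n"
    + "\n\n".join(summaries)
)
print(conclusion)
```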

Table 1. Collaborative Conclusion Written with ChatGPT (GPT-4, March 23, 2023)



ACKNOWLEDGEMENTS

Artificial intelligence was used in different parts of this article. ChatGPT Plus (GPT-4) wrote the AI-generated manifesto section on its own and was used in collaboration with the authors for the conclusion. DALL-E 2 and MidJourney v4 were used to generate the images illustrating the abilities of GAI.

This article was originally published in the Journal of Creative Behavior.
