AI & Academic Writing

Artificial intelligence can support scientific and academic writing.

Here you will learn how to use AI and pre-trained Language Models correctly for your thesis or scientific work and how to assess their limitations.

How does AI work, what support does it offer, and where are the limits and risks?

Average duration of the lesson: 15 minutes

Artificial Intelligence (AI), Machine Learning, Neural Networks, Deep Learning & Language Models ^ 

The following content was generated by ChatGPT (GPT-3) in response to the prompt "Describe the categories of Artificial Intelligence, Machine Learning, Neural Networks, and Deep Learning. Where are Language Models included in these categories?":

Artificial Intelligence (AI) refers to the ability of machines to simulate human intelligence and behavior. AI systems can learn and adapt to the environment, recognize patterns, and make decisions without human input. AI technologies can be categorized into three main sub-categories: Machine Learning (ML), Neural Networks (NN), and Deep Learning (DL).

Machine Learning (ML) is a subset of AI that focuses on developing and using algorithms to identify patterns and make predictions from data. ML algorithms can be used to detect fraud, predict customer churn, or recommend products.

Neural Networks (NN) are a type of ML algorithm that use a network of connected nodes to learn from data and classify or predict outcomes. NN models are used for image recognition and language processing.

Deep Learning (DL) is a subset of ML that uses neural networks to build more complex models for better accuracy. DL is used for complex tasks such as natural language processing, computer vision, and autonomous driving.

Language Models are a type of ML algorithm that are used for natural language processing (NLP). Language Models are used to identify and generate natural language text, such as for chatbots, speech recognition, and translation services.

The word "intelligence" is actually misleading here, because this is not intelligence in the human sense: all results come from statistical probability calculations. The algorithms find patterns and can then predict, with a certain probability, which result is closest to the training data.
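
To make this point concrete, here is a minimal, purely illustrative Python sketch of a count-based bigram model. Real Language Models use neural networks trained on vastly more data, but the principle of predicting the statistically most likely continuation is the same; the toy corpus below is invented for illustration.

    from collections import Counter, defaultdict

    # Toy "training data": the only text this model will ever know.
    corpus = "the cat in the hat sat on the mat . the dog sat on the rug .".split()

    # Training = counting which word follows which (bigram statistics).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    # "Prediction" is nothing more than reading off relative frequencies.
    counts = following["the"]
    total = sum(counts.values())
    for word, count in counts.most_common():
        print(f"P({word!r} | 'the') = {count / total:.2f}")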

Training process of Language Models ^ 

The following content was generated by ChatGPT (GPT-3) in response to the prompt "Explain how Language Models learn and operate.":

Language Models are algorithms used to predict the likelihood of a certain sequence of words or characters within a body of text. They are often used in natural language processing (NLP) tasks such as machine translation, text summarization, and sentiment analysis.

Language Models learn by analyzing patterns in text data. They use statistical techniques to look for patterns in the text that indicate the probability of a certain sequence of words or characters. For example, if a Language Model is trained on a corpus of text data, it might notice that the phrase “the cat in the hat” is more likely to appear than “the dog in the hat”.

Once the Language Model has learned these patterns, it can be used to make predictions. For example, when presented with an incomplete sentence, the model can predict which word or phrase is most likely to follow based on its learned patterns.

Language Models can also be used to generate text. For example, a Language Model might be fed a sentence such as “The cat in the hat was” and asked to generate the next word or phrase. It will look at the patterns it has learned and select the word or phrase that is most likely to follow.
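
The generation step can be illustrated by extending the counting idea from the sketch above: starting from a prompt, the model repeatedly predicts and appends a likely next word. This is a simplified, invented sketch of the principle, not how GPT-style models are actually implemented.

    import random
    from collections import Counter, defaultdict

    # Same toy bigram statistics as in the previous sketch.
    corpus = "the cat in the hat sat on the mat . the dog sat on the rug .".split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def generate(start, length=8):
        # Extend `start` word by word, sampling each next word in proportion
        # to how often it followed the previous word in the training data.
        words = start.split()
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:      # the model has never seen this word
                break
            next_words, weights = zip(*candidates.items())
            words.append(random.choices(next_words, weights=weights)[0])
        return " ".join(words)

    print(generate("the cat"))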

Figure: Process of a neural network

AI Tool selection ^ 

A Language Model's responses can only reflect the data it was trained on. Some Language Models are trained rather unspecifically on conversations, chat room discussions, customer service interactions, and social media posts; ChatGPT is one of these. Such models generate responses with little subject depth, but in very natural, spoken language.

Other Language Models have been trained on scientific texts, mostly scientific and medical publications. Here, the focus is less on subject-specific answers and more on better scientific language and support for scientific publications.

A variety of Language Models and AI tools from small start-ups and companies focus on specific answers in a narrow but well-defined context. For example, by training on information from a municipality or region, a Language Model can be built that helps citizens with questions about government and public services.

Research which data was used to train a language model and select the appropriate one. There is no single tool for all questions!

Opportunities and problems with answers from Language Models ^ 

When using pre-trained Language Models in personal, professional, and academic contexts, it is important to understand the opportunities as well as the challenges and problems.

The following content was generated by ChatGPT (GPT-3) in response to the prompt "List advantages and problems of Language Models.":

Advantages:

  1. Language Models allow for natural language understanding and enable conversational AI.
  2. They enable more natural and accurate interactions with machines, allowing for a better user experience.
  3. They can help to process large amounts of data, such as text, quickly and accurately.
  4. They allow for the creation of more complex and powerful artificial intelligence models.

Problems:

  1. Language Models are prone to overfitting, leading to poor generalization and over-interpretation of data.
  2. Language Models can suffer from a lack of data or quality data, leading to poor performance.
  3. Language Models can be difficult to interpret and debug, leading to long development cycles.
  4. Language Models are prone to bias and errors due to their reliance on human-generated data.

Challenges & Problems ^ 

The following content was generated by ChatGPT (GPT-3) in response to the prompt "List problems of Language Models in detail.":

  1. Data Sparsity: Language Models need a large amount of data to accurately capture the nuances of language and context. Without enough data, the model can’t learn the patterns and nuances of language and will make incorrect predictions.

  2. Long-Tail Data: Language Models can struggle with long-tail data, which is data that is rarely seen but should still be accounted for.

  3. Structured vs Unstructured Data: Language Models often struggle with unstructured data since they require a structure to be useful.

  4. Lack of Context: Language Models can fail to capture the nuances of a given context, leading to incorrect predictions.

  5. Out-of-Vocabulary Words: Language Models can struggle with out-of-vocabulary words, which are words that are not in the training data and are not accounted for in the model.

  6. Biases and Stereotypes: Language Models can learn biases and stereotypes from the data, which can lead to incorrect predictions or unfair outcomes.

  7. Unnatural Speech Patterns: Language Models can struggle to understand natural speech patterns, which can lead to incorrect predictions.

  8. Computational Complexity: Language Models can be computationally complex, requiring a lot of computing power and time to train.

  9. Data Quality: Language Models can suffer from poor data quality, which can lead to poor results.

  10. Overfitting: Language Models can fit the training data too well, leading to poor results on unseen data.
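
Points 1 (data sparsity) and 5 (out-of-vocabulary words) in the list above can be seen directly in the toy bigram sketch from earlier: a word that never occurred in the training data has no learned statistics at all, so the model cannot say anything sensible about it. Real Language Models soften this with subword tokenization, but the dependence on the training data remains.

    from collections import Counter, defaultdict

    # The same toy training data and counting step as in the earlier sketches.
    corpus = "the cat in the hat sat on the mat .".split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    # A word seen during training has learned continuations ...
    print(following["cat"])    # Counter({'in': 1})
    # ... an out-of-vocabulary word has none: the model knows nothing about it.
    print(following["zebra"])  # Counter()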

The problems that Language Models can create in their answers include hallucinations, bias, and misinformation. In addition, the superficiality of the answers to many questions is striking.

The following content was generated by ChatGPT (GPT-3) in response to the prompt "Describe in detail hallucinations, biases, misinformation, superficial answers, inability to cite correctly, and out-of-date data.":

Hallucinations: Language Models can generate seemingly accurate outputs that are not based on any real data. This is often seen when Language Models are used to generate text or speech, and the model generates something that sounds like a real response, but is actually made up.

Biases: Language Models can contain biases that can lead to unfair or inaccurate results. Biases can be based on gender, race, or any other kind of prejudice. For example, a Language Model may be trained on a large corpus of text that contains a disproportionate amount of male-oriented language, leading it to generate more male-oriented responses.

Misinformation: Language Models can produce false or misleading information. This can be especially problematic when Language Models are used in areas such as healthcare, where accurate information is critical.

Superficial Answers: Language Models can generate responses that are too simplistic or shallow. This can be a problem when the model is used to answer complex questions, as the model may be unable to generate a full, accurate response.

Inability to Cite Correctly: Language Models can generate text or speech that does not accurately cite the original source of information. This can lead to incorrect attribution or plagiarism.

Out-of-Date Data: Language Models can contain outdated information or data that is no longer accurate. This can lead to inaccurate or misleading results.

For these reasons, you need professional judgment and critical reflection, as well as expert and general knowledge, in order to recognize fake news.

Opportunities for Academic Writing ^ 

A number of universities are considering banning pre-trained Language Models in exams and especially in academic works such as term papers and theses. At some universities, texts generated by Language Models or with their assistance must be clearly indicated. Just as fast as Language Models evolve, tools are being developed that use AI to detect artificially generated texts.

From my personal point of view, all these approaches are unsuitable or only marginally suitable. I think that dealing with Language Models has to be learned as part of university education. This requires a deep understanding of the opportunities, but also of the problems, in order to be able to assess the areas of application. In my view, pre-trained Language Models are an additional tool, also for scientific writing.

In academic and scientific writing, there are a few ways to use pre-trained Language Models (LLMs).

Usage: Inspire the phrasing of research questions
Description: Ask for a specific question: request several research questions on your focused topic and let them inspire your own formulation.
Note: The LLM will not take over the refinement and final formulation of your research question for you; the concrete subject-specific and academic knowledge is not available (not trained). The generated research questions are very superficial and imprecise, but they can inspire you, above all in the linguistic formulation.

Create a First Outline ^

The following content was generated by ChatGPT (GPT-3) in response to the prompt "How can Language Models support students in scientific writing?"; misleading or false statements were deleted by me:

  1. Language Models can provide students with an outline for their scientific paper, prompting them for the title, introduction, methodology, results, and conclusion.

  2. Language Models can provide students with an introduction for their scientific paper, prompting them for the key points they want to discuss.

  3. Language Models can provide students with sample sentences and phrases to help them write more effectively.

  4. Language Models can provide students with advice on how to use language effectively in scientific writing.

  5. Language Models can provide students with resources for finding relevant literature to cite in their scientific paper.

  6. Language Models can provide students with advice on how to analyze and interpret their data.

  7. Language Models can provide students with help in understanding and using scientific terminology effectively.

  8. Language Models can provide students with advice on how to organize their thoughts and ideas effectively.

  9. Language Models can provide students with help in proofreading and editing their scientific paper.

  10. Language Models can provide students with advice on how to write the abstract of their paper

  11. Language Models can provide students with help in developing figures and tables to illustrate their findings.

  12. Language Models can provide students with advice on how to write a conclusion for their paper.

In addition, from my point of view, a few aspects are missing, such as support for coding and for statistical analysis, e.g. in the open source software R.

The possibilities listed above require a lot of input so that the Language Model can give supportive answers. Questions that are too general lead to superficial and useless answers. It is always important to reflect critically on the answers and not simply adopt them without consideration.

Once again, to put it clearly: with all these possibilities, the problems of pre-trained Language Models must always be reflected on critically. Simply copying the results can lead to big problems for the reasons mentioned above. At the same time, pre-trained Language Models can help by giving suggestions and examples in the direct question-answer process, which can then help you find your own solution.

Asking Questions ^ 

The following content was generated by ChatGPT (GPT-3) in response to the prompt "How can Language Model responses be improved? What are significant prompts and phrases that Language Models can easily process? How should queries be written, and what are no-gos? Explain in detail and give examples.":

Improving Language Model responses is an important part of creating an effective conversational AI. Here are some tips to help make the most out of Language Models:

  1. Be clear and concise when asking questions. Language Models are most effective when they are given clear, concise questions. Avoid long, complex sentences and try to stick to simple language. For example, instead of asking “What is the best way to prepare a steak?”, ask “How do I cook a steak?”

  2. Use simple phrases and keywords. Language Models work best when they are given simple phrases and keywords. For example, instead of asking “Where can I find a good Italian restaurant near me?”, ask “Italian restaurant near me”. This will make it easier for the AI to process and provide a more accurate response.

  3. Avoid using long strings of words or phrases. Language Models are not designed to handle long strings of words or phrases, as this can confuse the AI. Try to break up your query into multiple shorter questions if necessary. For example, instead of asking “What are the best places to eat in Tokyo?”, ask “Where are the best restaurants in Tokyo?” and “What type of food can I find in Tokyo?”

  4. Avoid sarcasm and colloquialisms. Language Models are not designed to understand sarcasm or colloquialisms. Stick to straightforward language when asking questions. For example, instead of asking “What’s the deal with this restaurant?”, ask “What is this restaurant like?”

  5. Be specific. Language Models work best when they are given specific questions. Avoid asking vague questions, such as “What’s the best way to make money?” Instead, ask a more specific question, such as “What are some ways to make money online?”

  6. Avoid yes/no questions. Language Models are not designed to answer yes/no questions. Instead, try to ask open-ended questions that require more than a simple yes/no response. For example, instead of asking “Do you like ice cream?”, ask “What is your favorite flavor of ice cream?”

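The tips above can also be tried out programmatically and compared side by side. The following is only a minimal sketch, assuming the official openai Python package (version 1.x) is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is just an example. It sends a vague and a more specific prompt so the answers can be compared.

    # Minimal sketch: compare a vague prompt with a specific one (tips 1 and 5).
    # Assumes the openai Python package (>= 1.0) and the environment variable
    # OPENAI_API_KEY; the model name below is only an example.
    from openai import OpenAI

    client = OpenAI()

    prompts = {
        "vague": "Tell me something about academic writing.",
        "specific": ("List three concrete strategies for structuring the "
                     "methods section of an empirical bachelor's thesis."),
    }

    for label, prompt in prompts.items():
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} prompt ---")
        print(response.choices[0].message.content)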

Tools to support you ^ 

The following list is a selection of tools that support the academic writing process and scientific work. Due to constant development, this list is never complete and the information about limitations is not always correct.

Tool          Service                                               Note
QuillBot      summarizing texts, paraphrasing, writing abstracts    great academic writing support, English only
Elicit        literature search by research question                very limited subject area of the sources
Scholarcy     summary of articles                                   English only
PaperDigest   summary of papers                                     limited dataset

Creative Commons Attribution 4.0 International Licence [CC BY 4.0]  |  AI & Academic Writing - Christian Huber  |  23.08.2023  |  Impressum  |  Datenschutz
The texts on this page were copy-edited with the help of Generative Pre-trained Transformers / Large Language Models.