
Research Design

The research design connects the research question and methodology. It outlines the logical and transparent planning of the research process, specifies the chosen methods, and justifies their application.


 

This chapter explains how to develop a research design based on a research question. It addresses research logic, fundamental methodological decisions, and key methods.

After working through it, readers should be able to translate research questions into a consistent research design, select appropriate methods, justify their use, and reflect on both the quality and limitations of the approach.


 


Summary [made with AI]

Note: This summary was produced with AI support, then reviewed and approved.


  • A research design is the blueprint of an academic study. It links the research question with the chosen methods and ensures that the process is logical, transparent and verifiable.
     
  • Its functions are logical structure, traceability and verifiability. Examples show why clear steps and justified methodological choices are necessary.
     
  • Limitations arise from data quality, sampling, contextual boundaries and possible researcher bias. Stating these openly is a mark of academic integrity.
     
  • Ethical aspects include informed consent, data protection and safeguarding against harm or discrimination.
     
  • Topic development starts with a problem statement. Criteria for a suitable topic are relevance, generalisability, availability of literature, data access and feasibility.
     
  • A research gap defines unanswered questions in the existing literature. From this, a problem statement is derived which explains its importance for theory, practice or society.
     
  • Research questions should be clear, answerable, delimited, relevant and open in terms of outcomes. Using wh-questions supports precise formulation.
     
  • Hypotheses are precise, testable assumptions derived from theory and prior research. Types include difference, correlation and causal hypotheses.
     
  • Methodological choices concern qualitative, quantitative and mixed-methods approaches. The decisive factor is the fit with the research question.
     
  • Qualitative research aims at understanding and interpretation, quantitative research at measuring and testing. Mixed methods combine both logics.
     
  • Sampling, data collection and analysis follow the chosen logic. Quality assurance requires validity, reliability, transparency and ethical reflection.
     

Topics & Content

1. Research Design as a Bridge between Question and Method

A research design can be understood as the plan or blueprint of an academic study. It demonstrates how one moves from an initial idea or research question to verifiable results. While the research question defines what is to be examined, the research design sets out how the investigation is structured step by step.

The function of a research design is to ensure that the study is logical, transparent, and verifiable:

  • Logical Structure
    A research design guarantees that the individual steps - from identifying the topic through data collection to analysis - build logically on one another. This prevents essential intermediate steps from being omitted or results from failing to correspond to the questions posed.

    Example: A researcher who wishes to examine building users’ satisfaction cannot begin directly with data analysis but must first formulate a precise research question, choose suitable methods, and determine how the responses will be analysed.

  • Transparency
    Academic work requires that others can follow how results were achieved. A research design makes these decisions visible. It sets out why a particular method was chosen rather than another, and why the chosen approach fits the research question.

    Example: If a questionnaire is used, the researcher must be able to justify why this is more appropriate than an interview or a case study.

  • Verifiability
    Academic results must not be mere opinions. They must, in principle, be open to verification by others. A research design specifies exactly how the study was conducted so that other researchers can follow the same steps and check whether they arrive at similar results.

    Example: A study on the energy efficiency of buildings is only verifiable if it is clearly described which data were collected, how they were processed, and with which statistical procedures they were analysed.

Limitations and Constraints

No research design is perfect. Every method involves limitations that must be considered and openly acknowledged.

  • Data Issues: There may be insufficient data, or the quality of the data may be restricted.

  • Sampling Issues: Surveys do not always reflect the entire population, as only certain groups may participate.

  • Context-Boundedness: A case study provides in-depth insights but cannot easily be generalised to all other cases.

  • Influence of Researchers: Especially in qualitative studies, personal assumptions or the way questions are formulated may influence the results.

Highlighting such limitations is not a weakness but a sign of academic integrity. It shows that researchers critically reflect on their approach and realistically assess the scope of their findings.

Ethical Considerations

All research is embedded within an ethical framework. This primarily includes the responsible treatment of participants and data:

  • Individuals may only be surveyed or observed if they have given their consent.

  • Personal data must be protected, anonymised, or pseudonymised.

  • Findings must not harm or discriminate against anyone.

Checklist

The checklist serves as a guide and summarises the key elements that should be considered in every research design. It helps to make the overall structure of the study visible, justify methodological decisions, and openly reflect on potential weaknesses or limitations.

The individual points of the list are to be understood as minimum requirements. Depending on the topic, scope, and research question, the level of detail may vary. What matters is that all dimensions are at least addressed and documented in the planning process.

1. Research Question

2. Aim(s) of the Study

3. Justification of the Approach

4. Planned Procedure

5. Quality Assurance

6. Reflection on Limitations

7. Ethical Note


2. From the Problem Statement to the Research Question

Every academic study begins with a topic that is gradually specified and refined during the planning process. The starting point is a problem statement, from which a research question is developed. This research question forms the core of the study and may be further specified through hypotheses.


2.1 Topic Selection and Problem Definition

Selecting a topic is the first crucial step in the research process. It determines the thematic framework within which the study is conducted and significantly influences motivation, feasibility, and academic quality.

Topic selection itself is a multi-stage process:

  1. Choosing an area of interest (literature, practice, society, personal interest).

  2. Checking whether the topic meets the requirements of academic work (relevance, generalisability, literature base, data access, feasibility).

  3. Identifying a research gap that shows the academic value of investigating the topic.

  4. Narrowing down to a problem statement that clearly defines the central challenge.

2.1.1 Sources for Topic Selection

Finding a suitable topic for an academic study is the first step in the research process. A topic does not emerge by chance but through conscious engagement with academic, practical, and societal contexts. Various sources provide starting points for developing a research question that is both academically relevant and suitable for completion within the scope of a bachelor’s or master’s thesis.

  • Academic Literature

    • Current articles in academic journals, monographs, or conference proceedings reveal the state of research.

    • Particularly valuable are indications in sections such as Outlook or Further Research, where open questions are explicitly highlighted.

    Example: New approaches to evaluating sustainability certificates in the real estate sector, which are discussed in publications but have not yet been empirically tested.

  • Professional Practice

    • Research questions from companies, projects, or institutions can be examined scientifically.

    • Practical problems often provide not only interesting topics but also access to data.

    Example: Optimising energy management in a hotel business, where it is investigated scientifically which measures actually contribute to reducing energy consumption.

  • Societal Developments

    • Political debates, new legislation, technological innovations, or social trends open up current research questions.

    • These areas are dynamic and can be approached from different perspectives (technical, economic, social).

    Example: Opportunities and risks of hydrogen as an energy carrier in urban contexts.

  • Teaching and Coursework

    • Content from seminars, lectures, or projects may serve as a starting point.

    • Smaller assignments or presentations can be further developed and examined in greater depth.

    Example: A project on building user satisfaction is expanded into a systematic investigation in a final thesis.

  • Personal Interests and Observations

    • Personal experiences or everyday observations may also provide a starting point, provided they can be translated into a form that allows for academic generalisation.

    • The crucial step is linking them to a theoretical or empirical framework.

    Example: Observations on the use of smart home technologies among friends lead to a study on acceptance and patterns of use.

2.1.2 Requirements for a Suitable Topic

Not every interesting subject is automatically appropriate for an academic study. To meet the standards of a bachelor’s or master’s thesis, a topic must fulfil certain criteria. These criteria ensure that the subject is not only engaging but also academically manageable and methodologically sound.

  • Academic Relevance

    • A topic should contribute to the advancement of knowledge or address a practical issue in an academic way.

    • Mere description is not sufficient; the study must promise a clearly recognisable gain in knowledge.

    Example: The use of hydrogen buses in Tyrol becomes more relevant when examining which factors promote or hinder their deployment, rather than simply stating the number of buses in operation.

  • Generalisability

    • Findings should have some degree of transferability beyond the specific case.

    • Even if a case study is the focus, it should be made clear which general insights can be derived from it.

    Example: A study of a single hotel is only appropriate if the results can be transferred to similar businesses or placed within a broader context.

  • Literature Base

    • A topic must be able to build on existing academic literature.

    • Without sufficient sources, a solid theoretical framework is not possible.

    Example: A thesis on a very recent trend is only suitable if there are already initial academic studies available or if neighbouring theories can be incorporated.

  • Availability of Data and Research Units

    • For empirical studies it is crucial that data can be collected or existing datasets used.

    • If interviews are planned, it must be clear that potential participants are accessible.

    Example: A survey of facility managers is only meaningful if there are connections to companies or networks that allow for a sufficient sample size.

  • Feasibility within the Scope of the Study

    • Time and organisational constraints must be considered.

    • Bachelor’s theses usually have a smaller scope and fewer resources than master’s theses, so the topic must be narrowed accordingly.

    Example: Instead of studying Sustainable Urban Development in Europe, a bachelor’s thesis would be better focused on Strategies for Green Roofs in Innsbruck.

Distinction: Project Work vs. Academic Study

An academic study differs fundamentally from project work. Project work often aims at the practical implementation or planning of concrete measures, such as the introduction of a new energy management system in a company. An academic study, by contrast, requires the systematic investigation of a research question.

Practical examples, case studies, or company projects can certainly serve as starting points, but they must always be generalisable, addressed with academic methods, and situated within the existing body of research.

Example: While project work may plan and technically implement the energy management system of a particular hotel, an academic study investigates the overarching question: Which factors influence the success of energy management systems in the hotel sector? In this way, the focus goes beyond a single case and contributes to broader academic understanding.

2.1.3 Identification of Research Gaps

A research gap refers to the part of a field of study for which no, insufficient, or contradictory academic evidence exists - or where established findings are not available for the relevant context, target group, method, or time frame. It justifies why a new study is necessary and highlights the expected contribution to the academic discourse. A well-documented research gap prevents the repetition of already resolved questions and directs the study towards generating genuine added value.

Important Distinctions:

  • Not mere "novelty for its own sake": A rarely studied topic is not automatically a research gap. What matters is the epistemological need (e.g. unclear cause-effect relationships, missing transferability, methodological blind spots).

  • Not a practical task: A practical implementation problem becomes a research gap only when formulated as a researchable academic question and situated within the body of existing research.

  • Content Gap: A relevant aspect has so far not been examined at all or only marginally. Example: Influence of acoustic comfort on user satisfaction in office buildings.

  • Contextual Gap: Findings exist but cannot be transferred to the relevant region, sector, or population. Example: Neighbourhood storage: many studies on metropolitan areas, few on alpine regions.

  • Methodological Gap: A phenomenon has almost exclusively been studied with one method; alternative approaches are lacking. Example: PV acceptance studied mainly through cross-sectional surveys; field experiments are missing.

  • Temporal Gap: Older studies no longer reflect current practice or technology. Example: Heat pumps: studies prior to 2020 without current funding schemes and grid integration.

  • Theoretical Gap: Contradictions, unexplained mechanisms, or lack of model integration. Example: ESG scores and market value: divergent effects, unclear causal pathways.

  • Operationalisation/Data Gap: Key constructs are inadequately measurable or data are missing. Example: Smart-readiness of existing buildings without validated indicators.

  • Synthesis/Review Gap: Many individual studies but no systematic synthesis or review. Example: User satisfaction: no up-to-date systematic review for the German-speaking context.

The process from an initial topic idea to the robust justification of a research gap follows a structured and documented approach. Each intermediate result (search strings, selection criteria, extraction tables) forms part of the later methodological transparency.

  1. Exploratory Orientation
     Aim: Gain an overview; identify key concepts, theories, contexts, and typical methods.
     Procedure: Review 5-10 overview sources, note key concepts and variables, record typical data types and designs, sketch an initial topic map.
     Output: Preliminary concept and topic list, one-page scoping note.

  2. Developing a Search Strategy
     Aim: Conduct a reproducible, broad-covering yet focused literature review.
     Procedure: Collect synonyms and related concepts, use Boolean operators (AND, OR, NOT), adapt PICO or PEO, combine controlled and free search terms, develop an example search string.
     Output: Documented search strings in versions, list of accepted synonyms, defined inclusion and exclusion criteria.

  3. Selecting Sources
     Aim: Identify suitable and reliable publication outlets; use grey literature where appropriate.
     Procedure: Bibliographic databases, repositories, preprints, quality-assured reports, conference papers.
     Output: List of sources with purpose (database, theory, method, context).

  4. Conducting a Search Log
     Aim: Ensure replicability and transparency.
     Procedure: For each search, record date, database, search string, filters, and number of hits; version and justify changes.
     Output: Complete search log as a table or appendix.

  5. Screening with Criteria
     Aim: Reliably select relevant studies and reduce bias.
     Procedure: Title-abstract screening, full-text screening, backward and forward snowballing; note reasons for exclusion.
     Output: Overview of found, screened, and included studies; screening log.

  6. Mapping and Synthesis
     Aim: Systematically record what is known and where gaps exist.
     Procedure: Extract key information, visualise, synthesise consistencies and contradictions.
     Output: Extraction table and short synthesis note.

  7. Gap Formulation and Validation
     Aim: Specify and contextualise the gap precisely; test plausibility.
     Procedure: Formulate the gap statement, justify relevance, test feasibility, validate through feedback and a robustness check.
     Output: Final gap statement with justification of how the study addresses the gap.
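The Boolean search-string construction described in step 2 can be sketched in a few lines of Python. The concept groups and synonyms below are hypothetical examples chosen to match the running office-building topic, not prescribed vocabulary:

```python
# Invented concept groups for the example topic "acoustic comfort and
# user satisfaction in office buildings"; each inner list holds synonyms.
concepts = [
    ["user satisfaction", "occupant satisfaction"],
    ["office building", "workplace"],
    ["acoustic comfort", "noise"],
]

def build_search_string(concept_groups):
    """Join synonyms with OR within a group, and groups with AND."""
    groups = ["(" + " OR ".join(f'"{term}"' for term in group) + ")"
              for group in concept_groups]
    return " AND ".join(groups)

print(build_search_string(concepts))
# -> ("user satisfaction" OR "occupant satisfaction") AND
#    ("office building" OR "workplace") AND ("acoustic comfort" OR "noise")
```

Versioning the synonym lists rather than the finished strings makes it easy to document every change in the search log (step 4).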

2.1.4 Deriving a Problem Statement from the Research Gap

The research gap identifies the area in which existing studies and findings provide no answers or only insufficient ones. However, this alone does not yet explain why precisely this gap is academically and practically relevant. To justify its examination in a bachelor’s or master’s thesis, the research gap must be translated into a problem statement.

The problem statement specifies why the identified lack of knowledge constitutes a challenge. It links the academic starting point with its relevance for theory, practice, or society. While the research gap answers the question "What is missing in the literature?", the problem statement formulates the follow-up question "Why is this absence problematic - and why is it worth investigating?"

Key steps in formulating a problem statement:

  • Positioning within the state of research:
    The problem statement must clearly demonstrate how it arises from the identified gap.

    Example: "Numerous studies examine thermal and lighting comfort in office buildings. However, the influence of acoustic factors remains largely neglected."

  • Justifying relevance:
    It is not enough to state that something has not yet been studied. It must be explained what consequences result from neglecting this aspect.

    Example: "Since noise is perceived as a major problem in many open-plan offices, the lack of research on acoustic comfort leads to an incomplete understanding of user satisfaction."

  • Establishing concreteness:
    The problem statement must be formulated so that it can be directly translated into a research question. This requires narrowing it to specific contexts (e.g. region, time, population).

    Example: "It remains unclear in particular what role acoustic conditions play in Austrian office buildings and how they influence user satisfaction."

Characteristics of a good problem statement:

  • derives clearly and transparently from a documented research gap.
  • demonstrates the practical and/or theoretical relevance of the problem.
  • is formulated so that a concrete research question can be developed from it.

The relationship between the three levels can be summarised as follows:

  • Research Gap - Guiding question: What is missing in the existing literature, or what is inadequately studied? Example: The influence of acoustic comfort in office buildings has so far hardly been studied.

  • Problem Statement - Guiding question: Why is this absence relevant, theoretically or practically? Example: As noise is a common problem in open-plan offices, neglecting acoustic factors leads to an incomplete understanding of user satisfaction.

  • Research Question - Guiding question: How should this problem be investigated concretely? Example: What role do acoustic conditions play in the satisfaction of users in Austrian office buildings?

2.2 Research Question

The research question forms the core of an academic study. It translates a previously formulated problem statement into a precise, researchable question and thereby guides all subsequent decisions of the research design, from the choice of methods to the analysis. A clearly formulated research question delineates the object of investigation, makes expectations for data and analysis transparent, and allows the progress of knowledge achieved in the study to be assessed in a comprehensible way.

2.2.1 Characteristics of a Good Research Question

A research question translates the problem statement into a precise academic inquiry. Good research questions are characterised by the following features:

  • Clarity and Precision
    Formulations are unambiguous, central terms are understandable and, where necessary, defined. Ambiguous or metaphorical expressions are avoided.

    Imprecise: "How does digitalisation have an impact?"

    Refined: "How does the introduction of a digital building management system influence the electrical energy consumption of office buildings in Austria?"

  • Answerability
    The question can realistically be addressed with the available methods, data, and resources. This includes a broad idea of which types of data are required and how they can be collected or obtained.

  • Defining Boundaries
    Space, time, population, and object of investigation are specified. Such boundaries increase academic feasibility and transparency.

    Example: "in Tyrol", "between 2018 and 2024", "users of co-working spaces", "existing office buildings"

  • Relevance
    The question promises academic or practical value, e.g. by clarifying contradictory findings, testing a theoretical mechanism, or deriving well-founded recommendations for practice.

  • Openness of Results
    The question does not pre-empt answers or evaluations. It is open to different, including unexpected, findings.

Formulating Research Questions

When formulating a research question, it is helpful to ensure that the essential dimensions of the study are explicitly stated: what is being studied, who or what is affected, where the study takes place, when it is relevant, how a process unfolds, and why certain relationships exist. Not every research question must address all of these aspects, but reflecting on them supports precision, clarity, and academic rigour. Descriptive questions often focus on what and where, while explanatory questions are more strongly guided by how and why. This provides a practical framework for developing well-structured and meaningful research questions.

  • What: Defines the subject or phenomenon under investigation. Example: What are the most important factors influencing user satisfaction in office buildings?

  • Who: Specifies the group or population involved. Example: Who uses co-working spaces in Tyrol, and what expectations do these users have?

  • Where: Establishes the spatial context. Example: Where do differences in the acceptance of photovoltaic systems emerge between urban and rural regions?

  • When: Determines the temporal framework of the study. Example: When do seasonal variations in the energy consumption of student halls of residence occur?

  • How: Focuses on processes, mechanisms, or relationships. Example: How does the introduction of an energy management system affect electricity consumption in commercial properties?

  • Why: Targets causes, background factors, or explanations. Example: Why do municipalities decide in favour of or against the use of hydrogen buses?

Example

Unsuitable: "Why is renewable energy the best solution for all problems?" - this already contains a claim and is too general.

Refined: "Which factors influence the decision of Austrian municipalities to adopt photovoltaic systems?" - precise, researchable, and open to different possible outcomes.

2.2.2 Types of Research Questions

Research questions can be distinguished according to their intended contribution to knowledge. Such a typology helps to plan methodological fit and clarify the expected form of evidence.

  • Descriptive: Describing a phenomenon or situation. Example: "What is the share of timber construction in new buildings in Tyrol in 2023?"

  • Explanatory: Analysing causes and relationships. Example: "Why do companies decide in favour of certain certification systems in facility management?"

  • Prognostic: Forecasting future developments. Example: "How is the acceptance of hydrogen mobility likely to develop over the next ten years?"

  • Design-Oriented: Developing measures or action options. Example: "Which strategies are suitable for implementing circular economy concepts in housing construction?"

  • Evaluative: Assessing processes or programmes. Example: "How effective are government subsidy programmes for introducing heat pumps in existing buildings?"

2.3 Developing Hypotheses

The research question is usually formulated openly and aims to investigate a phenomenon systematically. It marks the starting point of the research process and sets the direction for design, data collection, and analysis.

The hypothesis is a precise and testable assumption derived from theory and the state of research, formulating an expected relationship or difference. Hypotheses are primarily used in quantitatively oriented designs but may also play a role in mixed-methods studies.

  • Research Question: "Does room temperature influence job satisfaction in offices?"

  • Hypothesis: "The higher the perceived room temperature, the lower the reported job satisfaction."

The hypothesis differs from the research question in its directed statement and the possibility of being confirmed or falsified through data. It makes explicit which effect is expected and how it is likely to manifest.

2.3.1 Criteria of Good Hypotheses

Hypotheses summarise theoretical expectations. For them to be academically fruitful, they should meet the following criteria:

  • Testability
    A hypothesis must be testable with empirical data, e.g. through statistical models, experiments, or clearly defined comparison groups. Non-testable, purely normative statements are unsuitable as hypotheses.

  • Clarity
    The formulation is unambiguous and avoids vague terms. Variables and relationships are explicitly named, and directions of measurement are specified where appropriate.

  • Justification
    The hypothesis is grounded in theory and the state of research. It follows logically from existing models, findings, or plausible mechanisms.

  • Falsifiability
    It must be possible to refute the hypothesis with data. Statements that remain "true" regardless of the outcome are not scientifically testable.

  • Simplicity
    Complexity is reduced to what is necessary. A simple, clearly testable hypothesis is preferable to an overloaded multiple claim, provided the theory allows for this.

Example

Unsuitable: "Some people think renewable energy is good" - not testable and too vague.

Refined: "Households with higher educational levels show greater acceptance of renewable energy" - clear, testable, and theoretically justifiable.


2.3.2 Types of Hypotheses

Hypotheses can differ in content and can be classified according to their logical structure and the type of expected relationship between variables. A conscious decision for a particular type increases transparency, strengthens methodological alignment, and ensures that the results of the study are clearly interpretable.

A systematic classification helps to formulate hypotheses precisely and to select appropriate methods of testing. The following types of hypotheses are particularly relevant in academic work - especially in technical and socio-economic fields of study.

  1. Difference Hypotheses
    Difference hypotheses express the expectation that two or more groups or conditions differ from each other. They address the question:
    Is there a difference between Group A and Group B?

    Example: "Job satisfaction is higher in buildings with daylight access than in buildings without daylight."

    Such hypotheses usually involve comparing two or more means and are often tested using statistical methods such as the t-test or analysis of variance.

  2. Relationship Hypotheses (Correlational Hypotheses)
    Relationship hypotheses assume that two or more variables are related. The direction of the relationship may remain open or be specified.

    Example: "The higher the thermal comfort, the higher the reported user satisfaction."

    These hypotheses are often tested using correlation or regression analyses and are typical of exploratory research questions.

  3. Causal Hypotheses
    Causal hypotheses go a step further: they claim not only a relationship but also a cause-effect connection. This makes them particularly demanding, as causality can only be demonstrated under strict methodological conditions, such as through experiments or controlled quasi-experimental designs.

    Example: "The introduction of an energy management system leads to a significant reduction in electricity consumption."

    The main challenge lies in ruling out alternative explanations (confounding or third variables).

  4. Directional Hypotheses
    Directional hypotheses not only predict a difference or relationship but also specify in which direction it goes.

    Example: "Acceptance of photovoltaic systems is higher among younger respondents than among older ones."

    They are more precise than non-directional hypotheses and require a solid theoretical or empirical rationale for the assumed direction.

  5. Non-Directional Hypotheses (Two-Sided Hypotheses)
    Non-directional hypotheses merely state that a difference or relationship exists, without predicting its direction.

    Example: "There is a difference in the acceptance of photovoltaic systems between younger and older respondents."

    They are broader in scope and are often used when the theoretical or empirical basis for a directional hypothesis is lacking.

  6. Null Hypothesis (H₀) and Alternative Hypothesis (H₁)
    In empirical research, especially in statistics, a distinction is made between the null hypothesis and the alternative hypothesis:

    • Null Hypothesis (H₀): There is no difference or relationship.
    • Alternative Hypothesis (H₁): There is a difference or relationship.

    The null hypothesis is tested statistically. If it is rejected, the alternative hypothesis is assumed.

    Example:
    H₀: "There is no difference in the acceptance of photovoltaic systems between urban and rural areas."
    H₁: "Acceptance of photovoltaic systems is higher in urban areas than in rural areas."
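To make the H₀/H₁ logic concrete, the sketch below runs a two-sided two-proportion z-test on invented survey counts - one common way to test a difference in acceptance rates between two groups. The counts and the 5% significance level are illustrative assumptions, not data from this chapter.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test of H0: p_a == p_b against H1: p_a != p_b."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)      # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
    return z, p_value

# Hypothetical data: 120 of 200 urban vs 90 of 200 rural respondents accept PV systems
z, p = two_proportion_z_test(120, 200, 90, 200)
reject_h0 = p < 0.05  # if True, H0 is rejected and H1 is assumed
```

With these invented counts the test rejects H₀ at the 5% level; with real data, the appropriate test (z-test, χ², t-test) depends on the measurement scale and design.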


3. Research Logic and Fundamental Methodological Decisions ^ top 

The research design forms the foundation of every academic study and determines how a research process is structured and conducted. It involves not only the practical planning of individual steps but also fundamental considerations that shape the entire process of generating knowledge. Every academic project operates within a field of tension between theoretical orientation, methodological implementation, and practical feasibility.

Research logic describes the pathway through which knowledge is generated and addresses the question of how new insights can be developed on the basis of existing knowledge. Fundamental methodological decisions provide the framework within which data are collected and analysed. Both - logic and methodological orientation - are inseparably connected. Without a clear idea of the epistemological approach guiding the research, every methodological decision remains fragmented.

For researchers, this means that they must engage with the underlying logics of academic reasoning at the very beginning of their work. These logics are not merely abstract theories but determine whether results are verifiable, transparent, and transferable. Equally important are the fundamental methodological decisions that define which types of data are collected, how they are processed, and what conclusions can be drawn from them.


3.1 Choice of Research Approach: Qualitative, Quantitative, Mixed Methods ^ top 

The choice of research approach determines the types of data, modes of analysis, explanatory power, and limitations of the results. The decisive factor is the fit with the research question, the theoretical framework, available resources (time, budget, access to fields/participants, data quality), and ethical requirements. Approaches are not strictly separate "camps"; rather, they form a continuum that can be meaningfully combined depending on the research question.

3.1.1 Qualitative Research - Understanding, Interpreting, Contextualising ^ top 

Qualitative research focuses on meanings, patterns of interpretation, processes, and contexts. It primarily addresses "how?" and "why?" questions when phenomena are under-researched or when a deep understanding of participants’ perspectives is required. Sources of data include interviews (semi-structured, narrative, focused), focus groups, observations (participant/non-participant), field notes, documents, artefacts, or audio-visual materials. Sampling typically follows purposive strategies (e.g. theoretical sampling, maximum variation, contrasting cases) in order to capture relevant cases.

Analytical techniques include thematic or content-structuring analysis, Grounded Theory coding, discourse/frame analysis, interpretative phenomenological analysis, qualitative content analysis, or ethnographic thick description. Quality assurance aims at credibility and transparency (e.g. transparent coding decisions, audit trail, triangulation of data/participants/methods, member checking, researcher reflexivity). Generalisability is not statistical but argued as "transferability", for example through thick contextual description.

3.1.2 Quantitative Research - Measuring, Testing, Generalising ^ top 

Quantitative research tests hypotheses, estimates parameters, and examines relationships or differences based on numerical data. It primarily addresses questions such as "how often?", "to what extent?", or "is there an effect?". Data collection includes standardised surveys, tests, measurement series, administrative/secondary data, or sensor-based measurements. Sampling is probabilistic (e.g. simple/stratified/cluster/multi-stage random sampling) to allow inference to the population.

Sample size planning (power analysis) and measurement quality (objectivity, reliability, validity) are central. Analyses include descriptive statistics, regression and variance models, scale analysis, causal models (e.g. difference-in-differences, instrumental variables, matching), as well as time-series and panel methods. Quality assurance relies on measurement accuracy, internal/external validity, replication, and sensitivity analyses. Generalisation is achieved through statistical estimation with confidence intervals and error control.
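The normal-approximation formula behind such power analyses can be sketched as follows. The effect size, α, and power values are illustrative defaults; for real planning a dedicated tool (e.g. G*Power or a statistics package) would be used.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided comparison of two means,
    using the normal approximation: n ~= 2 * (z_{1-a/2} + z_{power})^2 / d^2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = nd.inv_cdf(power)           # quantile corresponding to the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

n = n_per_group(0.5)  # medium effect (Cohen's d = 0.5) -> about 63 per group
```

The formula makes the trade-offs visible: halving the expected effect size roughly quadruples the required sample.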

3.1.3 Mixed Methods - Integrating, Complementing, Validating ^ top 

Mixed-methods research combines qualitative and quantitative logics within a coherent design in order to pool strengths and reduce blind spots. Integration can take place at different levels: in the design (sequence/parallelism), in the methods (e.g. an embedded qualitative sub-study within a survey), and in interpretation (joint conclusions).

Common designs:

  • Explanatory Sequential
    Quantitative results first (e.g. survey, effect), followed by qualitative exploration to explain patterns.

  • Exploratory Sequential
    Qualitative exploration first for concept formation or instrument development, followed by quantitative testing and generalisation.

  • Convergent Parallel
    Parallel data collection, separate analyses, subsequent integration ("triangulation") with a focus on convergence or complementarity.


3.1.4 Decision Criteria - Fit with the Research Question ^ top 

The choice of research approach must be closely tied to the research question. A clear match between the intended contribution to knowledge and the methodological implementation is essential for results to be meaningful and interpretable. It is therefore crucial to analyse the type of research question carefully and derive the most appropriate approach.

Exploratory questions, which ask how a phenomenon is experienced or why certain processes occur, are particularly suited to qualitative research designs. Here, the focus is on understanding meanings, perspectives, and contexts. Qualitative approaches make it possible to develop new concepts, capture complex interrelations, and highlight previously overlooked aspects. In mixed-methods designs, they can be used at the beginning of an investigation to generate hypotheses or prepare measurement instruments.

Testing questions, which ask whether a relationship exists, how strong an effect is, or whether groups differ significantly, require quantitative approaches. These allow hypotheses to be tested with the help of standardised measurements and statistical procedures. Quantitative designs are especially appropriate when results are to be generalised or when precise estimates for the population are needed. In mixed-methods research, they can be combined with qualitative findings to enrich numerical results with context and interpretation.

Questions focusing on the development or validation of instruments and concepts often benefit from a sequential mixed-methods approach. First, qualitative data are used to capture constructs precisely and identify suitable indicators. These indicators can then be quantitatively tested, scaled, and verified. This creates a close link between theoretical conceptualisation and empirical measurement.

In addition to the substantive orientation of the research question, further criteria come into play. These include access to data and participants - for instance, whether the entire population is reachable or whether only a few highly informative cases can be selected - the availability of resources such as time, budget, and technical infrastructure, and ethical feasibility, especially in sensitive contexts or with vulnerable groups. Finally, the intended form of conclusion must be considered: should the aim be statistical generalisation to a population, or context-bound transferability based on transparent case analysis?

There is no universally "best" method. What matters is the fit with the research question, the theoretical framework, and the practical conditions of the study. Only through this deliberate alignment does the research approach become robust, transparent, and transferable.

3.1.5 Sampling, Data Collection, Analysis - Consequences of the Choice ^ top 

The choice of research approach has direct methodological consequences. Each orientation - qualitative, quantitative, or mixed methods - requires specific decisions regarding sampling, data collection, and analysis. These elements must be aligned consistently in order to create a coherent research design that delivers robust results and remains comparable within the academic discourse.

Sampling ^ top 

In qualitative research, smaller, purposefully selected samples are often used. The focus is less on statistical representativeness than on the informational richness of cases. Selection follows theoretical considerations, defined criteria, or the principle of maximum variation. The aim is to capture as broad a range of perspectives and experiences as possible until theoretical saturation is reached.

By contrast, quantitative approaches generally require larger samples drawn randomly from the population. This ensures that results can be generalised and statistically supported statements about the population can be made. Methods such as simple random sampling, stratified sampling, cluster sampling, or multi-stage sampling secure methodological rigour.

Mixed-methods designs combine these logics. For example, a targeted qualitative sample may first be used to develop concepts, followed by a large random sample to test them quantitatively. Conversely, a representative survey can be complemented by qualitative in-depth interviews to enrich numerical results with narratives and context.
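As an illustration of these sampling logics, the sketch below draws a proportional stratified random sample using only Python's standard library. The population, the strata, and the sampling fraction are hypothetical.

```python
import random

def stratified_sample(population, strata_key, fraction, seed=42):
    """Draw a proportional stratified random sample: the same sampling
    fraction is applied within every stratum."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of the sketch
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(len(units) * fraction))  # at least one unit per stratum
        sample.extend(rng.sample(units, k))
    return sample

# Hypothetical population: 300 students tagged by degree programme
population = [{"id": i, "programme": p}
              for i, p in enumerate(["engineering"] * 180 + ["business"] * 90 + ["design"] * 30)]
sample = stratified_sample(population, lambda s: s["programme"], fraction=0.1)
```

Proportional allocation is only one scheme; disproportionate allocation (oversampling small strata) is common when subgroup comparisons matter.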

Data Collection ^ top 

Qualitative data collection is usually open or semi-structured. Interviews, observations, or focus groups give participants space to express their perspectives and allow flexibility in the collection process. The aim is to capture meanings, processes, and subjective experiences as authentically as possible.

Quantitative data collection relies on standardisation. Questionnaires, tests, or measurement instruments are applied according to fixed rules to ensure comparability and measurement quality. Standardisation reduces the scope for interpretation but enables statistical analyses and hypothesis testing.

Mixed-methods designs require careful planning of interfaces: Should a qualitative interview deepen the results of a survey? Or do qualitative categories provide the basis for scales in a subsequent questionnaire? The decision about sequence and integration must be explicitly justified.

Analysis ^ top 

Qualitative analyses are usually iterative: data are interpreted step by step, concepts are developed, tested, and refined. Methods such as coding, content analysis, or discourse analysis emphasise reflexivity and theoretical sensitivity.

Quantitative analyses follow a pre-defined analysis plan. Statistical techniques such as regression, analysis of variance, or hypothesis testing are applied to quantify effects and relationships precisely. Adherence to quality criteria such as reliability, validity, and objectivity, as well as conducting robustness checks, is crucial.

Mixed-methods analyses focus on the integration of both strands. Results must not only be interpreted separately but also systematically brought together. This requires clear points of integration in the project timeline, for example by comparing, complementing, or linking findings.

An important tool here is the use of joint displays - visual presentations of qualitative and quantitative results in a shared table or figure. For example, survey scale values can be placed alongside illustrative interview quotations. In this way, relationships, agreements, or contradictions become immediately visible. Joint displays are particularly helpful when complex results need to be condensed while remaining accessible.

In addition, comparative interpretations are central. This concerns not presentation but the analytical process itself. Qualitative and quantitative results are systematically related to one another: researchers examine whether qualitative findings explain or challenge quantitative results - and whether statistical relationships can be illustrated through individual cases.

Example: If a survey indicates that students are mainly dissatisfied with acoustics, interviews may provide detailed descriptions of noise disturbances that contextualise the figures. Conversely, if the survey shows high satisfaction while interviews repeatedly highlight problems, comparative interpretation opens new hypotheses - for example, about different student groups or contexts of use.

Example Joint Display:

Aspect              | Quantitative Findings (Scale Values)                           | Qualitative Findings (Interview Quotations)
Noise / Acoustics   | 68% of students rated the acoustics ≤3 on a scale from 1 to 10 | "In the group work rooms it is often so loud that it is hard to concentrate."
Indoor Climate      | Average value 5.2 out of 10, 25% dissatisfied                  | "In winter it is draughty and in summer much too hot - I often change rooms because of this."
Furniture           | 72% rated seating comfort as adequate                          | "The chairs are comfortable, but for long study sessions I miss an ergonomic solution."
Technical Equipment | 80% satisfied with Wi-Fi and power outlets                     | "It’s great that there are sockets everywhere, but sometimes the Wi-Fi drops out when too many are online."
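A joint display of this kind can also be generated programmatically, which helps keep scale values and quotations aligned as results are updated. The sketch below uses only the standard library; the two abbreviated rows are hypothetical entries.

```python
# Hypothetical data pairing survey scale values with interview quotations
rows = [
    ("Noise / Acoustics", "68% rated acoustics <= 3 (scale 1-10)",
     '"In the group work rooms it is often so loud ..."'),
    ("Indoor Climate", "Mean 5.2 / 10, 25% dissatisfied",
     '"In winter draughty, in summer much too hot."'),
]

def joint_display(rows, headers=("Aspect", "Quantitative Findings", "Qualitative Findings")):
    """Render a plain-text joint display that aligns both strands per aspect."""
    table = [headers, *rows]
    widths = [max(len(row[i]) for row in table) for i in range(len(headers))]
    lines = [" | ".join(cell.ljust(w) for cell, w in zip(headers, widths)),
             "-+-".join("-" * w for w in widths)]
    for row in rows:
        lines.append(" | ".join(cell.ljust(w) for cell, w in zip(row, widths)))
    return "\n".join(lines)

print(joint_display(rows))
```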

Example Comparative Interpretation:

The quantitative survey showed that 72% of students were generally satisfied with the furniture. On average, the score was 6.8 on a 1-10 scale. This result suggests that the majority perceived the seating as sufficient.

In the qualitative interviews, however, students repeatedly expressed criticism, especially in relation to longer study periods. A typical quotation was: "After two hours I get back pain - the chairs are not made for long sitting."

A comparative interpretation of these findings shows: Although the overall rating is positive, there is a discrepancy between surface-level satisfaction and underlying problems. The quantitative survey provides an overall average that appears rather positive, while the qualitative statements reveal specific weaknesses that remain hidden in the numerical score.

From this, the hypothesis can be derived that while the furniture is adequate for shorter use, it shows clear deficits in longer-term study sessions. For the university, this means that measures should not primarily address the general level of equipment but instead focus on ergonomic improvements for extended study periods.

3.1.6 Common Misconceptions - Clarifications ^ top 

In practice, numerous misunderstandings circulate about the different research approaches. Such misconceptions can lead to inaccurate assessments or insufficiently justified methodological choices. A reflective approach to these misconceptions is therefore central to the quality of academic work.

Qualitative Research - not unscientific ^ top 

A common assumption is that qualitative research is less rigorous or less scientific because it does not rely on numbers. In reality, qualitative approaches are based on clear theoretical foundations, methodologically regulated procedures, and transparent interpretative steps. Their academic rigour lies not in standardisation but in the systematic analysis of meanings, processes, and contexts.

Quantitative Research - not automatically objective ^ top 

It is also often assumed that quantitative research is per se objective because it works with numbers. Yet every measurement and every statistical model is based on assumptions: about the construction of variables, the choice of scales, the validity of the instruments used, and the mode of data collection. Quantitative results also require critical reflection and must be assessed in terms of their quality.

Mixed Methods - more is not automatically better ^ top 

The combination of qualitative and quantitative methods is frequently seen as the "gold standard". However, mixed methods are only meaningful when integration actually provides additional insights. A mere sequence of methods without a common research question or without systematic integration does not lead to better results and may instead create confusion and additional workload.

Sample Size - not an end in itself ^ top 

Sample size is not an isolated quality criterion. It must always be assessed in relation to the research question, the definition of the population, the chosen method, and the type of sampling procedure. Only then can results be meaningfully interpreted and generalised.

Larger samples do not automatically improve qualitative research. Here, the decisive criterion is theoretical saturation: cases are studied until no new relevant insights emerge. The goal is information density and in-depth analysis, not statistical representativeness.

In quantitative research, however, small samples are problematic because they do not allow reliable inferences about the population. Statistical tests require sufficient sample size to ensure stable estimates and meaningful confidence intervals and error margins.

Defining the population is of central importance. Only when the sample reflects the relevant characteristics of the target population - such as age, gender, educational background, or other dimensions relevant to the research question - can the results be generalised. A large sample is useless if it does not match the target group.

Example: If kindergarten children are surveyed about purchasing decisions, the sample may be large, but it does not represent the population of purchasing decision-makers.

Moreover, sample size calculators and formulas assume random selection. Only when participants are drawn using a randomised procedure can confidence intervals and error margins be calculated correctly. By contrast, if questionnaires are sent to an entire address list, this constitutes a formal census of the list, but the actual sample results from the responses received. These are based on self-selection, meaning that conventional sample size calculations cannot be applied directly.
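The standard formula behind such sample size calculators is shown below. As the text notes, it presupposes simple random sampling; the margin-of-error and confidence values are illustrative.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(margin_of_error, confidence=0.95, p=0.5):
    """n = z^2 * p * (1 - p) / e^2 for estimating a proportion from a
    simple random sample; p = 0.5 is the conservative (worst-case) default."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95%
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

n = required_sample_size(0.05)  # +/-5 percentage points at 95% confidence -> 385
```

For self-selected responses to a mailed list, this calculation does not apply directly: the achieved n says nothing about the bias introduced by who chose to respond.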

Allocation of Methods - not rigid ^ top 

Another misconception is that certain data collection methods are automatically tied to one approach. Questionnaires are often considered "typically quantitative" because they may contain standardised scales. However, they can also be designed openly to capture narrative responses or assessments - thus falling into the qualitative domain. Similarly, interviews are often seen as classically qualitative. In fact, there are highly structured interview formats with closed questions and fixed response options that can be analysed quantitatively. What matters is not the method itself but how it is designed and for what purpose it is employed.

3.2 Pragmatic Decision Path - from Research Interest to Design ^ top 

The development of a research design is not a linear automatism but a deliberate decision-making process. Researchers must weigh different options against one another, justify their choices transparently, and keep the research process flexible. A pragmatic approach consists of translating the research interest step by step into a viable design.

  1. Refine the Research Question
    At the beginning stands the clear formulation of the research question. It determines whether an exploratory, a testing, or a developmental approach is appropriate. Questions about how and why tend to suggest qualitative or exploratory designs, while questions about if and how strong point towards quantitative testing.

  2. Clarify the Theoretical Positioning
    Every study requires a theoretical framework. This serves to define relevant concepts and make mechanisms visible. Theory forms the basis for hypotheses, categories, or indicators, without which neither qualitative nor quantitative research can be conducted in a robust manner.

  3. Examine Data Availability
    A realistic view of available data sources is crucial. Some questions can only be answered if there is access to suitable participants, documents, or measurement instruments. The quality of the data - such as completeness, validity, or accessibility - must also be examined before the design is finalised.

  4. Assess Ethical and Resource-Related Feasibility
    Alongside substantive criteria, practical and ethical issues play a central role. Time constraints, financial resources, and participant burden set boundaries. Data protection, informed consent, and the avoidance of harm must likewise be ensured.

  5. Choose and Justify the Approach
    On the basis of the above considerations, the research approach is selected: qualitative, quantitative, or mixed methods. It is important to justify this choice - not out of habit or personal preference, but in close alignment with the research question and the logic of the intended conclusions.

  6. Derive Sampling, Data Collection, and Analysis Path
    Once the approach has been chosen, concrete decisions regarding sample selection, the design of data collection, and the form of analysis can be made. These elements must be coherently aligned to build a consistent research strategy.

  7. Plan Quality and Integration Strategy
    Already at the design stage, it should be considered how quality and robustness of findings can be ensured. In qualitative research this includes reflexivity, triangulation, and transparency of analytic steps. In quantitative research, reliability, validity, and objectivity are central. In mixed-methods designs, the additional question arises of how integration will be safeguarded, for example through joint displays or comparative interpretations.

  8. Pilot, Reflect, Adapt
    No research design is perfect from the outset. Pilot studies, pre-tests, or initial analyses help to identify weaknesses and make adjustments. Reflection and iterative refinement are an integral part of academic practice and contribute decisively to the quality of the final outcome.

Example ^ top 

This example illustrates how the pragmatic decision path can be applied step by step. From the initial research question through theoretical framing, data access, and feasibility to methodological design and quality assurance, a coherent research design emerges that delivers robust and practice-relevant results.

1. Refine the Research Question
The starting point is the question: How do students experience learning spaces, and why are certain areas avoided? Additionally of interest: How widespread are these problems, and which factors are statistically associated with satisfaction?

2. Clarify the Theoretical Positioning
The analysis draws on theories of learning environment research, concepts of "third places", and models of user satisfaction. These serve to frame relevant dimensions such as spatial design, atmosphere, noise exposure, and social interactions.

3. Examine Data Availability
The university has access to various learning spaces and to students from different degree programmes. Qualitative data can be collected through observations and interviews; quantitative data can be gathered via a standardised online survey.

4. Assess Ethical and Resource-Related Feasibility
Consent is obtained for interviews and observations, and data protection policies are observed. The effort for transcriptions, questionnaire development, and statistical analysis is factored into the project plan.

5. Choose and Justify the Approach
A mixed-methods design is chosen: first, exploratory qualitative data collection for concept development; then a quantitative survey for testing and generalisation; finally, integration of both strands. Justification: Only in this way can lived experience and prevalence be brought together.

6. Derive Sampling, Data Collection, and Analysis Path
Qualitative sampling: purposive selection of students from different disciplines and usage profiles. Quantitative sampling: a larger random sample via an online survey. Instruments: semi-structured interviews, observation protocols, and a standardised questionnaire with scales. Analysis: qualitative content analysis combined with statistical procedures (e.g. factor analysis, regression models).

7. Plan Quality and Integration Strategy
Qualitative quality assurance: triangulation of observations and interviews, transparency in the coding process. Quantitative quality assurance: pre-test of the questionnaire, checks of reliability and validity. Integration: joint displays that place qualitative categories and quantitative findings side by side.

8. Pilot, Reflect, Adapt
Before the main project, a small pilot is conducted: one interview and a short questionnaire run. This tests question comprehensibility, technical feasibility, and time requirements. The results feed into the revision of the design.


4. Research Methods ^ top 

Research methods are the concrete procedures through which research questions are addressed and hypotheses are tested. While research design and research logic provide the overarching framework, methods describe the practical instruments of data collection and analysis. They are the tools that enable researchers to translate theoretical concepts into verifiable empirical findings.

A sound understanding of different methods is essential in order to choose the most suitable approach for a given research question. No method is inherently "better" or "worse" - its appropriateness always depends on the research question, the subject of investigation, the available resources, and the intended knowledge outcomes. Qualitative and quantitative methods, as well as mixed forms, complement each other and open different perspectives on reality.


4.1 Secondary Data Analysis ^ top 

Secondary data analysis refers to the systematic use and examination of data that have already been collected and are now applied to a new research question. In contrast to primary research, where researchers generate their own data, secondary analysis works with existing datasets. These may originate from a wide range of sources: official statistics from public authorities, standardised surveys by large research institutes, company databases, historical registers, digital research archives, or freely available open data portals.

The central characteristic of secondary data analysis is that the data were not originally collected for the current research question. Researchers therefore use them in a new context, critically examine their quality, completeness, and suitability, and interpret them in light of their own question. Secondary data analysis is thus an independent method that can enable both descriptive and analytical evaluations.

4.1.1 Areas of Application ^ top 

Secondary data analyses are an important instrument in many fields of research, as they allow existing data sources to be reused for new questions. They are particularly suitable when primary data collection would be too costly, too time-consuming, or methodologically difficult to realise.

  • Studying large-scale relationships:
    Secondary data make it possible to analyse broad structures, for example by using official statistics, company databases, or international comparative studies. In this way, trends, patterns, and differences can be captured that would not be visible through small-scale primary studies.

  • Analysing historical developments:
    Many data series are collected regularly and over long periods of time, enabling the reconstruction of developments over time. This makes it possible to empirically trace growth processes, shifts in demand, or technological transformations.

  • Comparisons across regions or institutions:
    Existing datasets allow systematic comparison of locations, organisations, or countries. This supports the identification of best practices as well as differences in structures, processes, or outcomes.

  • Exploratory research questions:
    Secondary data are well suited for hypothesis generation when the aim is to identify initial assumptions or patterns that can later be tested with primary data. They thus contribute to the development and refinement of research questions.

  • Re-use of research resources:
    Data from completed projects or publicly available archives can be re-analysed, providing additional insights without the need for new data collection.

4.1.2 Strengths and Weaknesses ^ top 

Strengths

  • Cost efficiency: no costs for new data collection, as the data already exist.
  • Time saving: immediate access to large datasets.
  • Coverage: access to extensive, often representative datasets that could not be collected independently.
  • Comparability: enables longitudinal analyses and international comparisons.

Weaknesses

  • Limited fit: data were not collected specifically for the current research question but for other purposes.
  • Restricted control: researchers have no influence over instruments, sampling, or data collection processes.
  • Data quality: possible errors, omissions, or biases cannot be corrected retrospectively.
  • Accessibility and data protection: not all relevant data are freely available or may be used without restrictions.

4.1.3 Common Misconceptions ^ top 

A widespread misconception is that secondary data are "objective" because they originate from official agencies or large institutions. In reality, such data are also shaped by their collection method, definitions, and categorisations.

It is also often assumed that secondary data analysis is straightforward because the data already exist. In fact, it requires a thorough examination of the data basis, a critical engagement with the logic of collection and measurement instruments, and methodological adaptation to the specific research question.

Another misconception is that secondary data are always free and easily accessible. In practice, many datasets are subject to charges or restricted by data protection regulations.


4.2 Experiment ^ top 

An experiment is a scientific research method designed to demonstrate causal relationships between variables. At its core lies the deliberate manipulation of one or more independent variables (e.g. learning method, drug dosage, pricing strategy), while the effect of this manipulation on a dependent variable (e.g. learning outcome, recovery rate, purchasing decision) is measured.

The key feature of an experiment is the controlled arrangement of conditions. Researchers create an artificial but controlled environment in which relevant factors can be systematically managed and confounding variables minimised or eliminated. Only in this way can observed changes in the dependent variable be attributed with high probability to the influence of the manipulated independent variable.

Central to the logic of experiments is the principle of comparison groups. Typically, an experimental group is exposed to the manipulation, while a control group is not. By comparing the results of both groups, it becomes possible to identify whether the manipulation produced an effect. Random assignment can also be applied to prevent systematic biases in group composition.
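Random assignment as described above can be sketched in a few lines. The participant labels and the fixed seed are illustrative.

```python
import random

def randomize(participants, seed=7):
    """Shuffle participants and split them into an experimental and a control group,
    so that assignment does not depend on any participant characteristic."""
    rng = random.Random(seed)  # fixed seed only for reproducibility of the sketch
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

groups = randomize([f"P{i:02d}" for i in range(1, 41)])  # 40 hypothetical participants
```

Because every participant has the same chance of ending up in either group, pre-existing differences are distributed by chance rather than systematically.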

Experiments are considered in many disciplines to be the "gold standard" for testing hypotheses because they go beyond other methods: they show not only that two phenomena are related but also whether one causes the other. They are therefore particularly suitable when the research question concerns cause-effect relationships - an aim that observational or survey methods can only partially achieve.

At the same time, it is important to stress that experiments are not confined to the traditional laboratory setting. They can be conducted in real-world environments (field experiments), embedded in natural processes (quasi-experiments), or take place in digital contexts (e.g. A/B testing in online marketing). What unites all variants is the deliberate manipulation of conditions and the systematic observation of their effects.

Variant | Characteristics | Strengths | Weaknesses
Laboratory Experiment | Controlled, artificial environment | High internal validity, precise measurement | Low external validity, unnatural setting
Field Experiment | Natural environment, real-world situation | High practical relevance, realistic conditions | Limited control of confounders, logistical effort
Online Experiment | Digital environment, A/B tests, randomised | Large sample sizes, low cost, rapid data collection | Restricted to digital contexts, dependent on technical infrastructure

An experiment is therefore a systematic manipulation under controlled conditions, aimed at empirically testing hypotheses about cause-effect relationships.
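
The comparison-group logic described above can be sketched in a few lines of code. The following is a purely illustrative simulation (the numbers and the `run_experiment` helper are invented for this sketch): participants are assigned at random to a control or an experimental group, the manipulation shifts the outcome only for the experimental group, and the difference in group means estimates the effect.

```python
import random
import statistics

def run_experiment(n=1000, effect=0.5, seed=42):
    """Simulate a randomised two-group experiment.

    Each participant is randomly assigned to the control or the
    experimental group; the manipulation adds `effect` to the
    dependent variable for the experimental group only.
    """
    rng = random.Random(seed)
    control, treatment = [], []
    for _ in range(n):
        baseline = rng.gauss(5.0, 1.0)           # outcome without manipulation
        if rng.random() < 0.5:                   # random assignment
            control.append(baseline)
        else:
            treatment.append(baseline + effect)  # manipulated condition
    return statistics.mean(treatment) - statistics.mean(control)

diff = run_experiment()
print(f"estimated effect: {diff:.2f}")  # close to the true effect of 0.5
```

Because assignment is random, pre-existing differences between participants average out across the groups, so the observed difference in means can be attributed to the manipulation rather than to group composition.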

4.2.1 Areas of Application

Experiments are most appropriate when the aim is to test whether a particular factor genuinely causes a change in an outcome. They are useful not only for describing associations but above all for testing hypotheses and strengthening causal explanations.

  • Testing causal relationships:
    Experiments enable researchers to deliberately manipulate single factors and measure their effects. For instance, changes in processes, technologies, or organisational frameworks can be analysed in terms of outcomes and behaviours.

  • Evaluating interventions:
    New programmes, strategies, or technical solutions can be tested for effectiveness in experimental designs. This allows researchers to assess whether intended improvements occur and whether unexpected side-effects arise.

  • Comparing alternative options:
    By investigating several conditions in parallel, different variants can be compared. This supports evidence-based decision-making, for example when selecting more efficient procedures or evaluating alternatives in social, economic, or technical contexts.

  • Generating practice-oriented evidence:
    Field and online experiments in particular produce insights that are directly transferable to real-world contexts. They enable decisions to be based on empirical evidence rather than solely on theoretical assumptions or models.

  • Advancing theory:
    By rigorously testing hypotheses, experiments not only answer practical questions but also contribute to the validation, refinement, or extension of scientific theories.

4.2.2 Strengths and Weaknesses

Strengths

  • High internal validity: deliberate control allows causal conclusions.
  • Replicability: experiments can be repeated and verified.
  • Flexibility: experiments can be conducted in the laboratory, in the field, or online.

Weaknesses

  • Limited external validity: laboratory findings may not generalise to real-life situations.
  • Ethical restrictions: some questions cannot be investigated experimentally.
  • Resource intensity: experiments can be time- and labour-intensive.
  • Reactivity: participants may alter their behaviour when they know they are part of an experiment (Hawthorne effect).

4.2.3 Common Misconceptions

A frequent misconception is that experiments must always take place in a laboratory. In fact, they exist in different forms: laboratory, field, and online experiments.

It is also mistaken to assume that experiments are automatically "objective." Sampling, operationalisation, and interpretation all influence results.

Finally, it is often overlooked that experiments are not always the "best" method. They are highly suitable for testing causal hypotheses, but not necessarily the right choice for exploratory or descriptive questions.


4.3 Simulation

Simulation and modelling are research approaches in which real systems or processes are reproduced in a simplified, abstracted form in order to study their behaviour under specific conditions. The main aim is to make complex interrelations comprehensible, measurable, and predictable.

  • Modelling
    Development of a model, i.e. a structured representation of reality. Models can be conceptual (e.g. flowcharts, theories), mathematical (e.g. equations, algorithms), or computer-based (e.g. software models). The key point is that they represent selected aspects of a system while deliberately excluding others. Every model is therefore a simplified, selective representation - never a complete reflection of reality.

  • Simulation
    Carrying out experiments on models. Simulations make it possible to examine how a system behaves when specific parameters are altered, or how it might develop under hypothetical conditions. They are therefore a means of generating knowledge through the controlled variation of model assumptions.

Models and simulations are closely linked: without a model there can be no simulation; without simulation a model remains a static representation. Only through dynamic testing does it become visible which consequences arise from specific inputs, conditions, or disruptions.

Procedure

The implementation of simulations and modelling follows a sequence of methodological steps that ensure models are built transparently, tested reliably, and interpreted critically.

  1. Problem definition and research aim
    At the outset stands the precise formulation of the research question: What is to be investigated through the simulation? Equally important is the delimitation of the system under study. Not all aspects of reality can be represented, so decisions must be taken about which processes and variables are relevant.

  2. Model construction
    In the next step a model is created that reproduces reality in simplified form. This may be conceptual (e.g. flowcharts), mathematical (e.g. systems of equations), agent-based (e.g. simulations of decision-making), or physical-technical (e.g. heat flows). What matters is that assumptions and simplifications are documented explicitly, as they determine the explanatory power of the model.

  3. Data basis
    The data basis forms the foundation of every model and simulation. It serves to determine parameters realistically, define input values, and enable later calibration. Data can originate from very different sources - such as official statistics, research databases, case studies, technical measurements, or original empirical surveys. It is crucial to document the origin of the data, as this shapes the transparency and traceability of the model.

  4. Calibration and validation
    A model is only robust if it is checked against real-world observations.

    • Calibration means adjusting the parameters so that the model accurately reproduces known states.
    • Validation examines whether the model also produces meaningful results under different conditions.
    • In addition, sensitivity analyses are often conducted: these test how strongly outcomes depend on changes in individual parameters.
  5. Simulation experiments
    At the core of model use are simulation experiments. Here, scenarios are deliberately tested by altering parameters, input values, or boundary conditions. The aim is to make possible developments visible and to assess how sensitive the model is to specific changes.

    A classic approach is to change only one parameter at a time while keeping all others constant. This allows the isolated effect of that single change to be analysed. Such experiments are particularly suitable for making causal relationships visible and for identifying which variables have the strongest influence on results.

    In many fields, however, this one-dimensional variation is not sufficient, since systems are shaped by complex interactions between several parameters. In such cases, methods are used in which multiple parameters are varied simultaneously:

    • Monte Carlo simulations: Random sampling of parameters within defined probability distributions to generate a wide range of possible scenarios. This makes it possible to calculate probabilities for specific outcomes and to make uncertainties visible.
    • Sensitivity analyses with multiple variation: Systematic combination of parameters in order to capture interactions and non-linear effects.
    • Scenario sets: Development of consistent scenarios that take into account different combinations of assumptions (e.g. prices, demand, political conditions).
  6. Interpretation and generalisability
    The results must be interpreted in light of the research question. Simulations do not provide certain predictions but illustrate possible development paths under specific assumptions. It is therefore essential to critically assess to what extent the findings can be generalised to other contexts and what limitations are imposed by the model assumptions.

  7. Documentation and replicability
    Every step must be documented carefully: Which assumptions were made? Which data and software were used? Which parameters were set? Only in this way can other researchers or practitioners trace, replicate, or further develop the simulation.
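
Steps 2 to 5 above can be condensed into a short Monte Carlo sketch. The model below is a deliberately simple toy (the cost formula, parameter ranges, and distributions are invented assumptions, not empirical values); the point is the pattern: draw uncertain parameters from assumed distributions, run the model many times, and summarise the spread of outcomes.

```python
import random
import statistics

def heating_cost(insulation, price, demand):
    # Toy model (illustrative only): annual heating cost rises with
    # energy price and demand and falls with insulation quality.
    return demand * price / insulation

def monte_carlo(runs=10_000, seed=1):
    """Vary several uncertain parameters at once and collect outcomes."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        insulation = rng.uniform(1.0, 2.0)   # assumed quality factor
        price = rng.gauss(0.12, 0.02)        # assumed energy price, EUR/kWh
        demand = rng.gauss(15_000, 2_000)    # assumed demand, kWh/year
        results.append(heating_cost(insulation, price, demand))
    results.sort()
    return {
        "mean": statistics.mean(results),
        "p5": results[int(0.05 * runs)],     # 5th percentile
        "p95": results[int(0.95 * runs)],    # 95th percentile
    }

summary = monte_carlo()
print({key: round(value) for key, value in summary.items()})
```

Instead of a single point estimate, the simulation yields a distribution: the interval between the 5th and 95th percentile makes the uncertainty of the outcome visible, which is exactly the purpose of Monte Carlo variation described in step 5.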

4.3.1 Areas of Application

Simulations and modelling are particularly valuable when complex systems or processes need to be studied that are difficult or impossible to investigate directly in reality. They make it possible to design scenarios, test hypotheses, and support decisions on a solid evidence base.

  • Analysis of complex systems
    Simulations allow researchers to capture interactions in systems with many variables and feedback loops. Examples include infrastructures, energy systems, or organisational processes.

  • Forecasts and scenarios
    Models can be used to project possible future developments. By varying assumptions (e.g. prices, demand, resource availability), different scenarios can be designed and examined in terms of their consequences.

  • Policy and decision support
    Simulations offer decision-makers an instrument to test alternative courses of action without immediately implementing them in reality. This enables risks to be assessed, cost-benefit relations evaluated, and strategies optimised.

  • Planning and optimisation
    In both technical and organisational contexts, models are used to make planning more efficient. Applications range from optimising buildings (e.g. energy consumption, indoor climate) to designing logistics or production processes.

  • Research and theory development
    Modelling is not limited to practical use but also advances scientific knowledge. It enables systematic testing of hypotheses about mechanisms and the development of new theoretical concepts.

  • Education and communication
    Simulations are a vivid tool for explaining complex relationships. Through visual or interactive presentation, even non-specialists can follow developments and scenarios, which makes them highly relevant for science communication and teaching.

4.3.2 Strengths and Weaknesses

Strengths

  • Analysis of complexity: Simulations make it possible to understand systems with many variables, interactions, and feedbacks that would be almost unmanageable in reality.
  • Risk-free experimentation: Hypothetical scenarios can be tested without incurring real costs, dangers, or ethical issues. This is especially valuable in areas where real-world experiments would be impossible or irresponsible.
  • Forecasting capacity: Models allow the calculation of future scenarios, such as the development of markets, resource consumption, or organisational processes.
  • Decision support: Simulations enable decision-makers to weigh different options and act on the basis of evidence.
  • Transparency and traceability: Well-documented models clearly show which assumptions were made. This allows hypotheses to be examined and replicated by other researchers.
  • Flexibility: Models can be continuously adapted and extended as new data or insights become available.

Weaknesses

  • Dependence on assumptions: Every model is a simplified representation of reality. Its validity depends heavily on the quality and plausibility of the underlying assumptions. Inaccurate or overly simplified assumptions lead to distorted results.
  • Validity problems: Even complex models only represent parts of reality. It must always be critically examined whether findings are transferable to real processes.
  • High resource demand: Developing, calibrating, and validating models is often time- and resource-intensive. The more complex the model, the greater the need for expertise and computing capacity.
  • Risk of misinterpretation: Simulation results can give the impression of exactness, even though they are based on assumptions and simplifications. There is a danger of overinterpretation or uncritical acceptance as "objective".
  • Data dependency: Models require high-quality input data. If the data are incomplete, unreliable, or biased, the model will only reflect a limited reality.
  • Communication barriers: The more complex a model is, the harder it becomes to explain its functioning and limitations to non-specialists.

4.3.3 Common Misconceptions

A frequent misconception is to regard simulations as direct representations of reality. In fact, models are always simplified, selective representations that highlight certain aspects while leaving others aside. Results are therefore not "the truth" but approximations based on chosen assumptions.

Another widespread belief is that greater complexity automatically produces better models. In practice, overly complex models can become confusing, difficult to validate, and hard to communicate. Good models are characterised by capturing the essential elements while remaining manageable.

A further misunderstanding is the assumption that simulations are inherently objective. Even though they are based on mathematical or technical procedures, models always reflect the perspectives and decisions of the researchers: Which variables are included? Which assumptions are made? Which data sources are used? These decisions have a decisive influence on the results.

It is also often overlooked that simulations have little value without valid input data. Even the most sophisticated model is only as robust as the data on which it is based. If uncertain, incomplete, or biased data are used, the results may be misleading.

Finally, there is sometimes the mistaken expectation that simulations can predict the future. Simulations do not provide certain forecasts but rather scenarios that illustrate what is likely under specific conditions. They are tools for exploration and decision support, not instruments of deterministic prediction.


4.4 Case Study

The case study is a research method that focuses on the in-depth and comprehensive examination of a single case or a small number of selected cases. A "case" is understood as a clearly defined unit that can be subjected to systematic analysis. Such a unit may be an individual, a group, an organisation, an event, a place, a process, or even documents. The central idea of the case study is to investigate a phenomenon in its natural setting and full complexity, rather than reducing it to a few isolated variables.

Case studies are characterised by a holistic perspective. This means that not only individual aspects of the case are examined, but that interactions, contexts and framework conditions are also considered in the analysis. In contrast to experimental designs, which aim at high internal validity through the control of variables, the case study seeks depth of understanding, contextual sensitivity and a nuanced reconstruction of real processes.

The methodological foundation of case studies is the use of multiple data sources. Interviews, observations, document analyses and statistical data can be combined in order to examine the case from different perspectives. This process of data triangulation increases the credibility of the findings and helps to create the most comprehensive picture possible.

A particular strength of the case study lies in its ability to capture complex social and organisational phenomena that cannot be adequately addressed by standard quantitative instruments. Case studies are therefore especially suitable for research questions that require a deeper understanding of processes, actions, meanings and structures. They are also highly relevant for theory development: through the intensive analysis of one or a few cases, new hypotheses may emerge, existing theories may be refined or contextualised, and previously overlooked connections may be revealed.

From an epistemological perspective, the case study is characterised by a close link between empirical research and theory. It often operates in the tension between inductive and abductive logic: on the one hand, empirical observations are used to develop theories, while on the other, existing concepts and theoretical approaches are applied and further developed in the analysis.

Case Selection

The case study does not aim for statistical representativeness. Its generalisability is instead based on analytical generalisation: the findings of a single case can be generalised in theoretical terms when they are related to existing concepts. The value of the case study therefore lies less in its quantitative scope than in the depth with which it contributes to the understanding of social reality. For academic work, this means that the selection of the case (e.g. a company, project or organisation) must be described in a transparent and methodologically sound way. This includes:

  • Explaining why this particular case is especially suitable with regard to the research question and theoretical framework
  • Justifying whether it is a typical case, representing a general pattern, or a special/extreme case that allows for specific insights
  • Briefly outlining other cases considered and the reasons for rejecting them
  • Reflecting on possible influencing factors (e.g. the researchers’ closeness to the field) and critically assessing them

This makes it clear that the selection is not random or solely motivated by access to data, but is based on academic criteria. Such transparency strengthens the credibility and significance of the case study.

Example formulations for describing case selection in academic work

  • The selection of the company under investigation was based on its relevance to the research question, as it exemplifies medium-sized enterprises in sector X and thus represents a typical case.

  • The decision in favour of this project was made because it represents a particularly striking example of the implementation of sustainability strategies and thus serves as an extreme case, providing specific insights.

  • Several potential organisations were considered in the selection process. Ultimately, Organisation A was chosen, as comprehensive access to data (interviews, internal documents, publicly available information) was possible here, while other options were rejected for methodological reasons.

  • The researchers’ proximity to the field was reflected upon. To avoid possible bias, the choice of case was motivated by its fit to the theoretical research question rather than by personal accessibility.

  • The case study was selected to allow for analytical generalisation. The chosen case represents a typical manifestation within the sector and contributes to examining theoretical concepts in a real-world context.

Anonymisation

Equally important is anonymisation. While publicly available data - such as published annual reports, press releases or court rulings - can usually be mentioned by name, internal or sensitive information must be protected. The degree of anonymisation depends on the balance between academic transparency and confidentiality.

Original | Anonymised version
XY Ltd., Innsbruck, annual turnover €42m | "medium-sized manufacturing company in Western Austria, turnover range €40-50m"
IT service provider TechSolutions, Vienna | "large IT service provider in an urban area"
Location Salzburg | "a site in Western Austria"

The decision on whether to disclose or anonymise a case must be justified in the academic work. Typical formulations include:

  • Justification for disclosure:
    The company under investigation is named, as all information used originates from publicly available sources (annual reports, press releases). Permission to use this data was obtained.

  • Justification for anonymisation:
    The company under investigation is anonymised, as internal documents and confidential interview data are included in the analysis. To ensure academic transparency, industry, company size and region are specified, while the name and exact location are pseudonymised.

In this way, the methodological decision is made transparent while ensuring both academic quality and ethical standards.

4.4.1 Areas of Application

Case studies are employed when complex phenomena need to be examined in their entirety and when an in-depth understanding of structures, processes, and meanings is required. They are particularly appropriate when the interaction of different factors and their contextual conditions, rather than isolated variables, is to be analysed.

  • Analysis of complex systems and organisations:
    Case studies make it possible to capture processes, decision-making, or interactions within a specific system in detail. This is especially useful when structures and dynamics cannot be fully captured through standardised procedures.

  • Investigation of real-world contexts:
    Case studies are helpful when a research subject should not be artificially isolated but instead studied in its actual environment. They reveal how technical, economic, and social conditions are intertwined.

  • Exploration of processes and developments:
    Case studies can reconstruct long-term change processes, innovation trajectories, or organisational transformations. This provides insights into causes, mechanisms, and consequences that cross-sectional surveys may overlook.

  • Comparison and contrast:
    By examining multiple cases, similarities and differences can be systematically identified. This allows the generation of hypotheses or the testing of existing theories in different contexts.

  • Practice-oriented insights:
    Case studies offer concrete findings that can be directly applied in practice. They illustrate how theoretical concepts operate in real settings and which factors determine success or failure.

  • Theory development:
    Beyond their practical relevance, case studies contribute to advancing scientific theories. Through the detailed analysis of one or a few cases, new concepts can emerge or existing models can be critically reviewed and refined.

4.4.2 Strengths and Weaknesses

Strengths

  • Depth of understanding: Case studies provide detailed insights into complex phenomena.
  • Context sensitivity: The specific conditions and interactions of a case are made visible.
  • Data variety: By using different sources (interviews, documents, observations), a multi-layered picture emerges.
  • Theory development: Case studies are valuable for generating new hypotheses and refining existing theories.
  • Practical relevance: Findings are often directly applicable to concrete fields of practice.

Weaknesses

  • Limited generalisability: Findings refer to specific cases and are not statistically representative.
  • Researcher dependency: Interpretation and weighting of data require high reflexivity, as subjectivity may influence results.
  • Resource intensive: Detailed data collection and analysis demand significant time and effort.
  • Risk of overload: Extensive data volumes can be difficult to structure and analyse.

4.4.3 Common Misconceptions

A common misconception is that case studies are automatically representative of a larger population. In fact, the aim is not statistical generalisability but analytical generalisation. Findings can be transferred to theoretical concepts but not directly to all comparable cases.

It is also often assumed that a case study is "only descriptive". In reality, it is a systematic research method that is scientifically grounded through structured data collection, triangulation, and theoretical embedding.

Another misconception is that case studies can be chosen arbitrarily. The selection of a case must be well justified, for example on the basis of relevance, informational value, or theoretical interest. An unreflective choice greatly reduces the explanatory power.


4.5 Systematic Review

A systematic review is a scientific method that aims to compile the current state of research on a clearly defined question in a comprehensive, transparent, and methodologically verifiable way. In contrast to narrative or selective literature reviews, which are often shaped by subjective selection, a systematic review follows a structured, documented, and reproducible procedure.

Key steps include the precise formulation of the research question, the development of a systematic search strategy (including the selection of suitable databases and search terms), the definition of clear inclusion and exclusion criteria, as well as the critical appraisal of the studies found. All decisions taken in the process must be documented in a transparent manner, so that other researchers can review or replicate the procedure.

The sources in question are not general textbooks or reference works. Systematic reviews typically draw on primary scientific studies that contain original empirical data or theoretical models. These studies form the foundation of systematic reviews, as they provide concrete findings or concepts that can be assessed methodologically and related to one another. Textbooks or handbooks may serve for orientation, but as secondary sources they are not the primary subject of a systematic review.

The purpose of a systematic review is to produce a complete and unbiased picture of the state of research. This includes not only compiling the main findings, but also critically considering the methodological quality of the included studies, comparing results, and identifying research gaps. In this way, systematic reviews contribute significantly to organising and consolidating existing knowledge and to providing a sound basis for future research projects.

Note on terminology
In many disciplines, the term meta-analysis is also used. This refers to a specific quantitative procedure applied within a systematic review when several studies provide comparable data that can be statistically combined. A meta-analysis is therefore not a stand-alone method, but a particular tool used as part of a systematic review.

Implementation

For conducting systematic reviews, the PRISMA approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) has become the established standard. It ensures that the entire process is traceable, transparent, and verifiably documented.

  1. Identification

    • Development of a comprehensive search strategy with clearly defined search terms.
    • Search across several academic databases as well as grey literature.
    • Documentation of search paths, databases, and number of hits.
  2. Screening

    • Initial review based on titles and abstracts.
    • Application of broad inclusion and exclusion criteria (e.g. language, publication period, type of publication).
    • Removal of duplicates.
  3. Eligibility

    • Full-text review of the remaining studies according to precise inclusion and exclusion criteria.
    • Typical exclusion criteria: insufficient methodological quality, unsuitable population, lack of relevance to the research question, purely theoretical papers in an empirical context, or vice versa.
  4. Inclusion

    • Final selection of studies to be included in the analysis.
    • Documentation of the number of included studies and the reasons for exclusions.
  5. Data Extraction

    • Systematic recording of relevant information from the included studies (authors, year, study design, sample, methods, key findings).
    • Use of standardised tables or data extraction forms.
  6. Quality Assessment / Risk of Bias

    • Critical appraisal of the methodological quality and validity of the included studies.
    • Application of standardised assessment tools (e.g. checklists for study designs, bias assessment criteria).
  7. Synthesis

    • Integration of results, either narratively (qualitative) or quantitatively in the form of a meta-analysis.
    • Presentation of key patterns, differences, and research gaps.
  8. Reporting

    • Visualisation of the process in the PRISMA flow diagram, showing the number of studies identified, excluded, and included, along with reasons for exclusion.
    • Clear presentation of all methodological decisions to ensure transparency and replicability.
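
The counting logic of stages 1 to 4 can also be tracked programmatically. The following sketch is illustrative (the record fields and the two exclusion predicates are hypothetical): it removes duplicates by title and records how many studies survive each stage, yielding the numbers reported in the PRISMA flow diagram.

```python
def prisma_flow(records, exclude_on_screening, exclude_on_fulltext):
    """Track record counts through the PRISMA stages.

    `records` is a list of dicts with at least a 'title' key; the two
    predicates encode the screening and eligibility criteria.
    """
    identified = len(records)

    # Screening starts with duplicate removal (here: same title,
    # ignoring case and surrounding whitespace).
    seen, unique = set(), []
    for record in records:
        key = record["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(record)

    screened = [r for r in unique if not exclude_on_screening(r)]
    included = [r for r in screened if not exclude_on_fulltext(r)]

    return {
        "identified": identified,
        "after_deduplication": len(unique),
        "after_screening": len(screened),
        "included": len(included),
    }

# Hypothetical toy records
records = [
    {"id": 1, "title": "Study A"},
    {"id": 2, "title": "study a "},  # duplicate of Study A
    {"id": 3, "title": "Study B"},
    {"id": 4, "title": "Study C"},
]
flow = prisma_flow(
    records,
    exclude_on_screening=lambda r: "B" in r["title"],  # e.g. off-topic
    exclude_on_fulltext=lambda r: r["id"] == 4,        # e.g. weak methods
)
print(flow)
```

Keeping these counts in one place makes it straightforward to document, for every stage, how many records were excluded and why.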

4.5.1 Areas of Application

Systematic reviews serve to organise and critically assess the existing body of research and are therefore a key instrument when it comes to capturing the current state of knowledge on a specific question. They are particularly useful when research findings are widely scattered, partly contradictory, or methodologically heterogeneous, and when a structured synthesis is required.

  • Stocktaking of the state of research:
    Systematic reviews provide a transparent overview of which studies exist on a topic, which questions have already been examined, and what results have been obtained. This creates orientation for subsequent research projects.

  • Synthesis and comparison of findings:
    Different studies often yield partly contradictory results. A systematic review highlights these, places them side by side, and identifies both commonalities and differences. In this way, reliable conclusions can be drawn.

  • Assessment of methodological quality:
    Systematic reviews evaluate the methodological soundness of the included studies. This makes it possible to better judge the significance of findings and to identify potential biases or methodological weaknesses.

  • Identification of research gaps:
    Through systematic structuring, it becomes clear in which areas sufficient evidence exists and where further research is needed. This serves as an important basis for developing new projects.

  • Development of theoretical and practical guidance:
    Systematic reviews not only provide an overview, but also create a foundation for critically examining existing theories or formulating practical recommendations based on a broad body of empirical evidence.

  • Support for decision-making:
    In many contexts, systematic reviews are used to inform decisions - whether in planning, management, or policy-making. They provide robust evidence that goes beyond individual studies.

4.5.2 Strengths and Weaknesses

Strengths

  • Transparency: Clear criteria and structured procedures enhance traceability.
  • Clarity: Systematic summaries allow rapid access to extensive fields of research.
  • Evidence building: Quantitative syntheses (e.g. meta-analyses) can pool effects and strengthen the reliability of findings.
  • Error control: Biases or weaknesses of individual studies can be put into perspective within the overall review.
  • Research guidance: Results indicate where further research is needed and which questions have already been adequately addressed.

Weaknesses

  • Effort: Systematic reviews require time-intensive searches, selection processes, and analyses.
  • Dependence on data: The quality of the review depends on the quality and availability of primary studies.
  • Publication bias: Studies with significant results are published more frequently, which may distort systematic reviews as well.
  • Complexity: Differences in study designs, measurement tools, or populations make direct comparisons more difficult.
  • Limited generalisability: Systematic reviews are bound by the strength and scope of the underlying studies.

4.5.3 Common Misconceptions

A common misconception is that a systematic review is merely an "extended literature review." In fact, it differs from unsystematic reviews through a clearly defined methodology, documented selection criteria, and transparent analytical steps.

It is also often assumed that quantitative syntheses such as meta-analyses are "objective" because they use statistical methods. In reality, their reliability strongly depends on the quality of the included studies. Weak or biased primary studies can lead to misleading overall results.

Another misunderstanding is that systematic reviews are only relevant for quantitative research. Qualitative studies can also be systematically synthesised, for instance through meta-syntheses that integrate concepts, theories, or interpretations across several studies.

Finally, it is often overlooked that even a comprehensive systematic review is never "final." It reflects the state of research at a given point in time, which may change with the publication of new studies and data.


4.6 Questionnaire Survey

The questionnaire survey is one of the most widely used methods of data collection in the social, economic, and technical sciences. Its purpose is to systematically gather information from a larger number of participants. Questionnaires consist of a series of questions or items that are presented in a standardised format. This makes it possible to compare responses across participants and use them for further analyses.

Types of Questions

Questionnaires can include both quantitative and qualitative elements:

  • Quantitative: standardised questions with predefined response categories (e.g. scales, multiple choice). The aim is to analyse frequencies, correlations, or differences statistically.

  • Qualitative: open questions where respondents can formulate their answers freely. The aim is to capture subjective perceptions, interpretations, and explanations in detail.

Question Type | Characteristics | Areas of Application | Advantages | Challenges and Risks
Open Questions | Responses are formulated freely; no predefined response categories | Capturing subjective impressions, individual assessments, or aspects not previously known to researchers | High informational depth, new perspectives, particularly useful for exploratory studies | Time-consuming analysis, limited comparability, risk of ambiguous responses
Closed Questions | Predefined response options; respondents choose from a list | Measuring frequencies, distributions, and relations; standardised surveys | High comparability, easy to analyse, efficient for large samples | Low informational depth, responses limited to preset categories
Scaled Questions (e.g. Likert scales) | Respondents rate a statement on a gradual scale (e.g. from strongly disagree to strongly agree) | Measuring attitudes, satisfaction, acceptance, or perceptions | Produces differentiated data that can be analysed statistically, high degree of standardisation | Scales must be precisely constructed, risk of tendency towards neutral midpoint choices
Semi-open Questions (hybrid) | Predefined response options with the possibility to add a self-formulated answer | Useful when standardisation is desired but open responses may add value | Combines comparability with flexibility | Increased effort in analysis due to additional open responses
Filter and Contingency Questions | Direct respondents to different follow-up questions depending on their answers | Avoids irrelevant questions, increases survey efficiency | Individually adjusted surveys, reduces respondent frustration | Complex questionnaire design, increased risk of errors in implementation and analysis
Projective Questions | Questions asked indirectly, e.g. through scenarios, images, or hypothetical situations | Capturing underlying attitudes, motivations, or values that respondents may not express directly | Uncovers hidden perspectives, reduces social desirability bias | Difficult to analyse, high degree of interpretation, requires well-designed questions

Scales

Scales are a central instrument in standardised questionnaires, as they make it possible to capture attitudes, perceptions, or behaviours not only dichotomously (yes/no), but also in gradations. Instead of restricting respondents to a simple choice, scales allow for more nuanced assessments and thus enable finer analysis.

Commonly used types of scales include:

  • Likert scales: Respondents indicate the extent to which they agree with a statement (e.g. from "strongly disagree" to "strongly agree"). Likert scales are widely used because they translate attitudes into quantifiable data.
  • Rating scales: Evaluations are made on a scale of numerical or verbal steps (e.g. 1-10, very poor to very good). They are often used to measure satisfaction or the intensity of a perception.
  • Semantic differential scales: Respondents assess objects or concepts between pairs of bipolar adjectives (e.g. "modern - traditional", "practical - impractical"). They are useful for capturing complex perception profiles.
  • Visual analogue scales (VAS): Responses are marked on a continuous line between two extremes (e.g. pain perception between "no pain" and "worst imaginable pain"). This method is particularly applied when high sensitivity to subtle differences is required.

Advantages of scales

  • They increase measurement accuracy by capturing not only the existence of an attitude but also its strength.
  • They are suitable for statistical analysis and allow the calculation of means, variances, or correlations.
  • They enable comparisons between groups or over time.
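
As a minimal sketch of such analyses, the following Python example computes means and standard deviations for two hypothetical groups of 5-point Likert responses. Note that Likert items are strictly ordinal; treating them as interval data (and thus computing means) is a common but debated convention:

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
group_a = [4, 5, 3, 4, 4, 5, 2, 4]
group_b = [2, 3, 3, 2, 4, 2, 3, 3]

# Means and standard deviations support comparisons between groups
print(f"Group A: mean = {mean(group_a):.2f}, sd = {stdev(group_a):.2f}")
print(f"Group B: mean = {mean(group_b):.2f}, sd = {stdev(group_b):.2f}")
```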

Challenges of scales

  • The wording must be precise, as unclear anchor points (e.g. "rather agree") may be interpreted differently.
  • The number of scale points influences the results: a few points (e.g. 3 or 4) simplify responses, while many points (e.g. 10 or more) allow more differentiation but may overwhelm respondents.
  • Cultural differences play a role: in some cultures extreme values are avoided, while in others they are preferred.
  • Scales are susceptible to response tendencies, such as central tendency (preference for neutral answers) or acquiescence bias (general tendency to agree).
  • It must also be noted that a numerical response alone does not automatically explain what a given number means to respondents. For example, a score of "2" on a satisfaction scale indicates low approval, but it does not reveal whether this is due to missing facilities, unfavourable conditions, or personal expectations. For deeper insight, a combination with open questions or additional methods is often advisable.

Implementation

Questionnaires can be administered in written form (paper), digitally (online surveys), or orally (structured interviews). What matters most is clear structuring, comprehensible language, and alignment with the research question.

The quality of a questionnaire survey largely depends on the care taken in formulating and testing the questions. Even small ambiguities or unnecessary items can considerably reduce the validity of the data.

Where possible, questions should not be developed entirely anew but should draw on established templates from the research literature or existing studies. This ensures that formulations have already been tested and that evidence of validity and reliability is often available. When questions are newly developed, they must be closely aligned with the research question and underlying theoretical concepts.

Principles of wording

  • Clarity: Questions must be unambiguous, easy to understand, and precise.
  • Neutrality: Leading or value-laden formulations must be avoided.
  • Relevance: Only questions that genuinely contribute to answering the research question should be included. This is particularly important for personal data - it must be checked whether their collection is truly necessary.

Data protection and ethical aspects
Particularly strict requirements apply when collecting personal data. The General Data Protection Regulation (GDPR) of the European Union stipulates that only data strictly necessary for the research purpose may be collected (principle of data minimisation). Researchers must therefore always examine whether information is indispensable for answering the research question or whether it can be omitted.

A central risk lies in the fact that seemingly anonymous data may, in combination with other variables, allow re-identification of individuals. For example, if age, gender, department, and place of residence are collected together, it may be possible to identify participants in small samples or clusters. This would violate the basic principles of the GDPR, which aim to protect privacy and prevent data from being traced back to individuals.
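
This re-identification risk can be checked with a simple group-size count over the combined attributes (a minimal k-anonymity-style check). The records below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical survey records: (age group, gender, department)
records = [
    ("30-39", "f", "HR"),
    ("30-39", "f", "HR"),
    ("20-29", "m", "IT"),
    ("40-49", "f", "Sales"),
    ("40-49", "f", "Sales"),
    ("40-49", "f", "Sales"),
]

def risky_groups(records, k=2):
    """Return attribute combinations shared by fewer than k respondents,
    i.e. combinations that could allow re-identification."""
    counts = Counter(records)
    return [combo for combo, n in counts.items() if n < k]

print(risky_groups(records))  # the single ("20-29", "m", "IT") respondent stands out
```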

Particular caution is required when dealing with sensitive data (e.g. relating to health, religion, or political orientation). According to the GDPR, these may only be collected under the strictest conditions, such as explicit consent and clearly defined purposes. Even for less sensitive data, it is essential to consider whether the information is really needed for the analysis or whether anonymisation or aggregation (e.g. age groups instead of exact date of birth) is sufficient.
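
Aggregation as suggested above (age groups instead of exact ages) can be sketched as follows; the decade-band scheme is only one possible choice:

```python
def age_group(age, width=10):
    """Replace an exact age with an age band, e.g. 34 -> '30-39'."""
    lower = (age // width) * width
    return f"{lower}-{lower + width - 1}"

print([age_group(a) for a in [23, 37, 40, 58]])  # ['20-29', '30-39', '40-49', '50-59']
```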

The GDPR also requires transparency: participants must be informed in clear and accessible language about what data is collected, for what purpose it is used, and how long it will be stored. They must also have the right to withdraw consent and request the deletion of their data.

Pretests
Before being used in the main study, questionnaires should be tested in a pretest. This helps determine whether questions are understandable, whether response categories are appropriate, whether technical functions work properly (e.g. in online surveys), and whether the completion time is reasonable. Pretests make it possible to identify and address ambiguities or technical problems at an early stage.

Handling incomplete responses
A common misconception is that incomplete questionnaires must always be excluded. In fact, a differentiated approach is required:

  • If only a few questions are missing, the remaining data may still be valuable.
  • In some cases, it may even be appropriate to impute missing values statistically.
  • Exclusion should only occur if central variables are missing or if response patterns are evidently random or contradictory.
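
This differentiated approach can be sketched in Python. The hypothetical example below excludes cases missing a central variable and mean-imputes the remaining gaps; mean imputation is only one of several imputation strategies and is defensible only under certain missing-data assumptions:

```python
from statistics import mean

# Hypothetical responses; None marks a missing answer, "q1" is a central variable
responses = [
    {"q1": 4, "q2": 3, "q3": 5},
    {"q1": 5, "q2": None, "q3": 4},   # single gap -> may be imputed
    {"q1": None, "q2": 2, "q3": 3},   # central variable missing -> excluded
]

def clean(responses, central="q1"):
    """Exclude cases missing the central variable; mean-impute remaining gaps."""
    kept = [dict(r) for r in responses if r[central] is not None]
    for key in {k for r in kept for k in r}:
        observed = [r[key] for r in kept if r[key] is not None]
        for r in kept:
            if r[key] is None:
                r[key] = mean(observed)
    return kept

cleaned = clean(responses)
print(cleaned)
```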

4.6.1 Areas of Application

Questionnaire surveys can be used in a wide range of contexts where information on attitudes, opinions, experiences, or behaviours is required. They are particularly suitable when:

  • Larger groups need to be surveyed systematically in order to identify patterns and trends.
  • Comparisons are to be made between different groups, organisations, or points in time.
  • Subjective assessments such as satisfaction, acceptance, or perceptions are to be captured.
  • Hypotheses need to be tested or exploratory questions clarified.
  • Practice-oriented information is required to support decisions in organisations, planning processes, or projects.

Questionnaires are therefore a flexible instrument that can be used both for descriptive stocktaking and for analytical hypothesis testing.

4.6.2 Strengths and Weaknesses

Strengths

  • Wide reach: Surveys can cover large groups at comparatively low cost.
  • Standardisation: Asking all participants the same questions enables comparability.
  • Versatility: The combination of open and closed questions allows for both quantitative and qualitative analysis.
  • Efficiency: Online surveys in particular are quick to administer and easy to evaluate.

Weaknesses

  • Limited depth: Standardised questionnaires provide less detail than open interview formats.
  • Response bias: Social desirability or lack of motivation may distort answers.
  • Response rate: In voluntary online surveys, willingness to participate is a critical factor.
  • Comprehension issues: Ambiguous or unclear questions lead to misunderstandings and reduce data quality.

4.6.3 Common Misconceptions

A widespread misconception is that questionnaires are automatically a "quantitative" method. In fact, it depends on the design whether data are analysed numerically or interpreted qualitatively.

It is also often assumed that a large number of responses automatically leads to valid results. What really matters is whether respondents are representative of the target population and whether the sample is methodologically well-founded.

Another misconception concerns the handling of incomplete responses. It is often assumed that questionnaires with missing data must always be excluded. In practice, however, a more differentiated approach is advisable:

  • If only a few questions are missing, the remaining data can still be used.
  • Missing values can sometimes be statistically imputed under certain conditions.
  • Exclusion should only occur if central variables are missing or if response patterns are evidently random or contradictory.

It is also frequently believed that online surveys are inherently easier and better than traditional methods. While they are cost-effective and fast, they require careful design, technical safeguards, and targeted monitoring of response rates to avoid bias.

Finally, it is often overlooked that questionnaires are only as good as their design. Without precise wording, logical structure, pretesting, and strict adherence to data protection requirements, even large-scale surveys cannot produce robust results.


4.7 Interview

Interviews are among the central methods of qualitative research and exist in different forms:

Interview Type | Characteristics | Purpose of Use | Advantages | Limitations / Risks
Structured Interview | All questions are predefined; order and wording always remain the same | Comparison across many respondents; often used for quantitative analysis | High degree of standardisation, good replicability, efficient for large samples | Low flexibility, no adaptation to individual responses, risk of superficial data
Semi-structured / Guide-based Interview | Interview guide with core topics; order and depth may vary | Combines structure with flexibility; suitable for many qualitative research questions | Good balance between comparability and individual exploration; possibility to probe | Dependent on the skills of the interviewer; analysis is time-consuming
Unstructured / Narrative Interview | No fixed questions; open conversation; topic is shaped by the respondents | Exploratory studies; gaining in-depth insights and personal stories | Maximum flexibility; open to new and unexpected aspects; enables deep understanding | Low comparability; high dependence on the researcher's competence; analysis very time-consuming
Focus Group Interview | Discussion with several people at the same time; moderated by researchers | Capturing opinion formation, group dynamics, and collective perspectives | Efficient way to gain many perspectives in a short time; open to discussion; makes social processes visible | Individual perspectives move to the background; strong personalities may dominate; moderation-intensive

While structured interviews primarily ensure quantitative comparability and unstructured interviews provide maximum openness for individual narratives, the guide-based (semi-structured) interview represents a methodological middle ground. It combines the necessary thematic structure with the flexibility to respond to individual answers and to allow for in-depth exploration. Owing to this balance between comparability and openness, the guide-based interview is one of the most frequently used forms of qualitative research and forms the focus of the following discussion.

An interview guide contains the central topics and questions that should be addressed in all conversations. At the same time, interviewers retain the flexibility to react to answers, ask follow-up questions, and explore interesting aspects in more depth. Unlike standardised questionnaires, the questions do not need to be formulated in such a way that they can be answered without context or probing. Instead, the interview thrives on interaction, which allows researchers to uncover individual perspectives in a more differentiated way.

Implementation

Conducting guide-based interviews requires careful preparation, which must be both methodologically sound and practically feasible.

Formulation of questions
The quality of guide-based interviews depends largely on the care taken in developing and structuring the questions. A fundamental principle is that questions must be clear, open, and thematically focused. They should provide respondents with sufficient space to present their views, experiences, and reasoning without being restricted by narrow response categories.

The development of questions should always be guided by the research question: each guiding question must have a clear link to the research aim and ensure that the data gathered in the interview make a genuine contribution to answering the research question. To this end, it is useful to break down the research question into thematic sub-aspects and transfer these into the interview guide.

An important step is to draw on existing literature and established instruments. Frequently, questions from earlier studies can be adopted or adapted, which enhances comparability and ensures theoretical grounding. Where such templates are not available, new questions may be developed, but they should remain closely tied to theoretical concepts and clearly defined terms.

The interview guide itself typically contains overarching guiding questions for each thematic block. These serve as "entry anchors" and ensure that all relevant content is addressed. Supplementary sub-questions or potential probes can also be prepared in order to deepen responses, request examples, or clarify ambiguous statements. Thus, the interview guide is not a rigid script, but rather a structured orientation tool that allows for flexible yet systematic conversation management.

Unlike standardised questionnaires, questions in guide-based interviews do not have to be unambiguous without additional explanation or follow-up. Instead, openness and interactivity are integral to the methodical design. Nevertheless, all key thematic areas of the guide must be systematically covered in each interview to guarantee a minimum level of comparability.

Data protection and anonymisation
As with all qualitative methods, the principle of data minimisation under the General Data Protection Regulation (GDPR) also applies here. Only data directly required for the research question may be collected. Participants’ informed consent must be obtained and documented transparently. Interviews are often recorded (audio or video) and then transcribed; in such cases, it must be ensured that the data are stored in encrypted form and accessible only to authorised persons. In publications, all personal data must be anonymised - this includes not only names but also indirectly identifying information such as organisations, positions, or specific projects if these could enable conclusions about individuals.

Selection of interview participants
The selection of respondents is a key methodological step, as it strongly influences which perspectives become visible in the research process. While quantitative studies often aim for representative random samples, the selection logic in guide-based interviews is different. The focus is not primarily on statistical representativeness but on deliberately choosing individuals who can contribute relevant knowledge, experience, or viewpoints to the research question.

Selection is always guided by the research interest:

  • For exploratory questions, participants are sought who can cover the widest possible range of perspectives.
  • For hypothesis-testing or theory-driven questions, the selection is often based on characteristics relevant to the assumptions under investigation.

Different sampling strategies can be distinguished:

  • Theoretical sampling: Participants are selected so that different perspectives, roles, or contexts are included in the analysis. Example: In a study on organisational culture, individuals from different hierarchical levels or departments may be interviewed.
  • Criterion-based sampling: Selection is based on clearly defined criteria, such as experience in a specific field, membership of a target group, or involvement in a relevant process.
  • Extreme or contrasting cases: Deliberate selection of particularly typical or atypical cases to highlight differences and tensions.
  • Snowball sampling: Starts with a few central respondents who then refer additional relevant individuals from their network. This is particularly useful when access to certain groups is difficult.

The size of the sample in qualitative research is not predetermined but follows the principle of theoretical saturation: interviews are conducted until no substantially new insights emerge and central categories are sufficiently supported.

If all respondents share similar professional experiences, organisational backgrounds, or personal ties, there is a risk that certain perspectives are overrepresented while others are missing. Familiarity or prior relationships between researchers and participants can also influence openness and authenticity - either through restraint in critical statements or adaptation to expected positions.

Qualitative research, however, aims at diversity of perspectives and sensitivity to context. It is therefore necessary to include participants from different organisations, institutions, or social settings. This allows contrasts to emerge, which in turn enable deeper understanding of the phenomenon under study.

It is essential that the selection process is methodologically justified and transparently documented. Researchers must make clear why certain individuals were included and how they contribute to the research aim. This prevents results from appearing as random individual cases and strengthens the integration of the research into scientific discourse.

Example formulations for the selection of interview participants in academic work:

  • The interview participants were selected based on the criterion that they had at least five years of experience in project management and were therefore able to contribute sound practical knowledge.

  • To capture different perspectives, individuals from various hierarchical levels were deliberately interviewed (department heads, team leaders, operational staff).

  • The selection followed the principle of theoretical saturation: interviews were conducted until no new insights emerged.

Pretests
Before the main study, it is advisable to test the interview guide in a pretest with one or two people from the target group. This allows researchers to check whether the questions are understandable, whether the order and transitions make sense, and whether the interview duration is realistic. Technical aspects (e.g. recording devices, online tools) should also be tested. Pretests help identify ambiguities or unnecessary complexity in the guide and correct them before the main study.

Conducting interviews in person, by telephone, or online

Guide-based interviews can be carried out in different formats. The choice of format should depend on the research question, organisational conditions, and the characteristics of the target group.

  • Face-to-face interviews
    Considered the "classic" form, this approach offers the most intensive interview situation. Personal contact allows for non-verbal signals such as facial expressions, gestures, or pauses to be considered, which can provide valuable additional information for interpretation. Face-to-face interaction also helps build a trusting atmosphere, encouraging openness among respondents. However, this method requires more organisational effort (e.g. scheduling, travel).

  • Telephone interviews
    Telephone interviews are location-independent and easier to organise. They are particularly useful when respondents are difficult to reach or have limited time. However, the lack of visual cues means part of the communicative signals are lost. This may reduce the personal character of the interview and limit the depth of responses.

  • Online video interviews
    Video conferencing combines elements of both face-to-face and telephone interviews. It allows direct interaction with visual contact while still being location-independent. This makes it particularly practical for internationally dispersed respondents. However, technical issues (connection loss, sound or video problems) may disrupt the flow of conversation. Furthermore, not all participants may have the necessary equipment or familiarity with the tools.

  • E-mail interviews - not really interviews!
    Written interviews conducted via e-mail differ methodologically from oral interviews. Since immediate follow-up questions and spontaneous probing are not possible, a central characteristic of interviews - interactivity - is lost. Responses also tend to be shorter, more controlled, or more formal. While e-mail interviews offer advantages such as flexibility in timing and the opportunity for respondents to carefully reflect on their answers, they do not reflect the true essence of an interview. Interviews depend on the dynamics of live conversation, which allows new aspects to be explored and content to be deepened. For this reason, e-mail formats are better classified as written surveys rather than interviews in the strict scientific sense.

Transcription of the Interview & Anonymisation

A key step in qualitative interview research is transcription. This refers to the transfer of spoken language into written form in order to enable systematic analysis. The transcript forms the basis for methods such as qualitative content analysis, grounded theory or discourse analysis. Different transcription standards exist and are chosen depending on the research interest:

Type | Explanation | Example
Verbatim transcription | Every utterance is written exactly as it was spoken, including pauses, filler words and slips of the tongue. Suitable for analyses where linguistic details, interactions and forms of expression are relevant. | "Um… well, I, uh, think that we actually did the project pretty well."
Smoothed transcription | The spoken text is linguistically corrected and smoothed without changing the meaning. More readable when the focus is on content. | "I think that we actually did the project quite well."
Extended transcription | In addition to spoken words, non-verbal signals or particular emphases are documented. Useful for analyses that look at communication patterns or conversation dynamics. | "I think that we [laughs] actually did the project quite well. (Pause, 3 seconds)"

Transcripts primarily serve as a working basis for the analysis of qualitative data. They make interviews systematically analysable, as spoken language is converted into written form that can be processed using methodological approaches (e.g. qualitative content analysis, grounded theory, discourse analysis). Without a transcript it would hardly be possible to code statements precisely, form categories, or reconstruct conversation structures.

The publication of full transcripts in academic papers, however, is not common. There are several reasons for this:

  • Data protection and confidentiality: transcripts often contain sensitive or personal data, and their publication would be ethically and legally problematic.
  • Space limitations: journals or theses usually do not allow for the inclusion of extensive interview texts.
  • Focus of the study: the scientific contribution lies in the analysis, not in presenting the full interview content.

Anonymisation is a central aspect of scientific integrity and ethical standards. It must be decided to what extent information can be anonymised or disclosed.

Original | Anonymised Presentation
Noa Müller, Head of HR Department, Company XY, Munich | Management role in human resources, medium-sized company in southern Germany
Interview with Phil Becker, Project Lead at Construction Company Z, Hamburg | Project lead in a large construction company in northern Germany
Maxi Schmidt, Student at FH Kufstein Tirol | Student at an Austrian university of applied sciences

  • Reason for disclosure:
    Respondents are named because they explicitly consented to publication and all information is publicly available.

  • Reason for anonymisation:
    All interviewees were anonymised to maintain confidentiality. Information on function and organisational context remains to ensure scientific traceability.

  • Reason for partial anonymisation:
    Interview data were anonymised to the extent that no conclusions about specific individuals are possible. For better contextualisation, position and general context are indicated (e.g. industry, company size).

Instead of full transcripts, verbatim quotes are usually used which represent specific themes, categories or lines of argument. These quotes are integrated into the analysis and contextualised. Many qualitative studies use simplified labels or codes instead of full references (e.g. "(Interview, project lead, own data, 2025)"). Typical are I1, I2, I3 … (Interview 1, 2, 3), P1, P2 … (Person 1, 2) or IP_A, IP_B … (Interviewee A, B), which both enhance readability and ensure anonymity.
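
The I1, I2, … labelling described above can be generated mechanically, which avoids inconsistent codes across a larger interview set. The names below are the fictitious examples already used in this section:

```python
def build_labels(names):
    """Assign anonymised interview labels (I1, I2, ...) in order of first appearance."""
    labels = {}
    for name in names:
        if name not in labels:
            labels[name] = f"I{len(labels) + 1}"
    return labels

interviewees = ["Noa Müller", "Phil Becker", "Noa Müller", "Maxi Schmidt"]
print(build_labels(interviewees))
```

In practice, the mapping table itself contains personal data and must be stored separately and securely from the anonymised transcripts.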

Example of integrating a quote into an academic text:

The results show that the project leads interviewed assessed the process mainly positively. As one participant emphasised: “I think that we actually did the project quite well.” (Interview, project lead, own data, 2025).

The results show that the project leads interviewed assessed the process mainly positively. As one participant emphasised: “I think that we actually did the project quite well.” (I1).

If research data are to be made transparently accessible, e.g. for open science and replication studies, (anonymised) transcripts may be deposited in research data repositories. Access is usually controlled and permitted only for academic purposes.

Example formulations for the (non-)publication of transcripts in academic work:

  • The interviews were fully transcribed but only serve as the basis of analysis. Within this thesis only selected, anonymised quotes are published to illustrate the argument.
  • For reasons of data protection, full transcripts are not published. Central passages are included in the analysis in anonymised form.
  • The anonymised transcripts were archived in a protected research data repository and can be accessed there under controlled conditions for replication studies.

4.7.1 Areas of Application

Guide-based interviews are particularly suitable when:

  • Subjective perspectives and interpretations are central and cannot be captured in fixed response categories.
  • Processes, experiences, and backgrounds need to be examined in detail, for instance regarding decision-making, motivations, or patterns of action.
  • Supplementary in-depth information is required to complement quantitative surveys (mixed-methods designs).
  • Complex topics should be studied in a structured yet flexible way, without losing the open character of qualitative research.
  • Expert knowledge is to be explored, which cannot be adequately represented by standardised scales.

4.7.2 Strengths and Weaknesses

Strengths

  • Flexibility: Probing and in-depth exploration are possible, allowing answers to be clarified in their context.
  • Proximity to the field of study: Interviews make it possible to capture subjective meanings and personal experiences.
  • Structure: The interview guide ensures that key topics are addressed in a comparable way across interviews.
  • In-depth understanding: Interaction between interviewer and respondent enables going beyond superficial statements.

Weaknesses

  • Time and resource intensive: Interviews must be conducted, transcribed, and analysed in detail.
  • Dependence on interviewer: Questioning style, experience, and interviewing skills influence the results.
  • Limited comparability: Despite the guide, interviews vary in depth and level of detail.
  • E-mail interviews: Spontaneous dynamics are missing, probing is limited, responses are often shorter or more controlled. Misunderstandings can remain unnoticed, and participation depends strongly on respondents’ motivation.

4.7.3 Common Misconceptions

A common misconception is that interviews are simply "conversations." In fact, they are a scientific method with clear objectives, requiring systematic planning and methodologically sound analysis.

It is also often assumed that guide-based interviews are fully standardised. However, the guide only specifies thematic blocks and core questions - order and depth can be adapted flexibly.

Another misconception concerns e-mail interviews. Some researchers consider them equivalent to oral interviews. In practice, however, direct interaction is missing: spontaneous probing, non-verbal signals, and conversational dynamics are lost. While e-mail interviews offer advantages such as flexibility in timing and the opportunity for respondents to carefully reflect on their answers, methodologically they are not comparable to face-to-face or video interviews.

Finally, it is frequently overlooked that interviews are also subject to data protection and ethical standards. Interview content must be treated confidentially, personal data protected, and informed consent obtained. Especially in the case of recordings (audio, video, transcripts), compliance with the General Data Protection Regulation (GDPR) is essential.


4.8 Text Analysis

Text analysis refers to the systematic scientific examination of texts that themselves constitute the object of study. It thus differs fundamentally from systematic reviews, which evaluate research literature. In text analysis, the focus is on primary texts, i.e. documents that convey content directly without having already been scientifically interpreted or examined. These include, for example, legal texts, regulations, contracts, political strategy papers, organisational guidelines, or minutes.

The aim of text analysis is to uncover the content structures, linguistic patterns, or argumentative logics of such texts. The analysis may be descriptive, comparative, or interpretative in nature. The key point is that the text is not used merely as a source of data, but as an independent research object from which scientifically relevant questions are developed and addressed.

There are different methodological approaches, depending on which aspects of texts are the focus: content, structures, argumentation patterns, or discourses.

  • Qualitative Content Analysis
    Characteristics: Systematic categorisation of content; structured approach (e.g. according to Mayring, Kuckartz).
    Typical research questions: Which topics, regulations, or contents systematically appear in texts? How are they distributed?
    Example: Structured analysis of a law according to regulatory sections.

  • Comparative Text Analysis
    Characteristics: Comparison of several texts or versions.
    Typical research questions: How do laws in different countries differ? How has a regulation changed from an old to a new version?
    Example: Comparison of environmental laws in two countries.

  • Argumentation Analysis
    Characteristics: Analysis of logical and rhetorical structures in texts.
    Typical research questions: How are laws or measures justified? Which argumentative patterns dominate?
    Example: Analysis of the justification of a law.

  • Discourse Analysis
    Characteristics: Texts as part of societal discourses; focus on framing, linguistic patterns, and power structures.
    Typical research questions: How is a concept (e.g. "sustainability") linguistically framed? Which narratives are set in strategy papers?
    Example: Examination of policy strategy documents.

The four approaches often overlap in practice but pursue different research aims: while qualitative content analysis identifies structures, comparative text analysis focuses on similarities and differences. Argumentation analysis reveals the underlying logic of reasoning, and discourse analysis examines how texts are embedded in social and linguistic contexts. What they all share is the understanding of texts as primary research objects, analysed in a methodologically reflective way. Text analysis thus opens up the possibility of making institutional rules, linguistic framings, or argument-based legitimisations visible and comparable.
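The deductive, category-based step that qualitative content analysis builds on can be illustrated with a toy Python sketch: passages are assigned to pre-defined categories by keyword matching. This is a deliberate simplification for illustration only, not Mayring's or Kuckartz's actual procedure, and the category names and keywords are invented:

```python
# Invented category system: each category is defined by indicator keywords.
CATEGORIES = {
    "duties": ["must", "shall", "obliged"],
    "rights": ["may", "entitled", "right"],
}

def code_passage(passage):
    """Return all categories whose keywords occur in the passage."""
    lowered = passage.lower()
    return [cat for cat, keywords in CATEGORIES.items()
            if any(kw in lowered for kw in keywords)]

print(code_passage("Employees shall report incidents and are entitled to leave."))
# → ['duties', 'rights']
```

In real content analysis, coding is done by human judgement against explicit category definitions and anchor examples; automated matching like this can at most support a first screening.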

4.8.1 Areas of Application

Text analysis is particularly relevant when written documents are not merely used as background information but are treated as a central data source in their own right. It is suitable for research questions that address the content, structures, or meanings of institutional, legal, or organisational texts.

  • Analysis of laws and regulations
    Text analyses can reveal differences between legal versions (before and after a reform) or systematically compare regulations across countries. They can also show which thematic areas are emphasised and how terms are legally defined or linguistically framed.

  • Examination of contracts and guidelines
    Organisational documents such as employment contracts, works agreements, or internal policies contain normative provisions that shape action. Text analysis can examine which duties, rights, or responsibilities are emphasised and how they are expressed linguistically.

  • Evaluation of political strategy papers
    Political programmes, national strategies, or international agreements contain objectives and patterns of reasoning that can be systematically examined through text analysis. This allows priorities, tensions, and normative frameworks to be identified.

  • Comparison of institutional documents
    Text analysis is useful for identifying similarities and differences between organisations, regions, or sectors. For example, sustainability reports from different institutions can be compared to highlight trends and shifts in discourse.

  • Investigation of linguistic constructions and discourses
    Terms such as "sustainability", "resilience", or "innovation" are often not clearly defined. Text analysis can examine how such terms are used in different documents and how they are loaded with specific meanings. This makes visible how language contributes to legitimising measures or shaping social reality.

  • Analysis of reasoning structures
    In justificatory texts (e.g. legal justifications, policy statements, or management reports), typical argumentative patterns can be studied. Text analysis enables researchers to assess the logic and consistency of these arguments and to detect differences between actor groups.

4.8.2 Strengths and Weaknesses

Strengths

  • Availability of data: Texts such as laws, strategy papers, or contracts are often publicly accessible and do not need to be generated through primary data collection. This also makes it possible to include extensive and historical documents.
  • Traceability: As texts represent stable data sources, analyses can usually be repeated or verified at any time. This increases transparency and replicability of research.
  • Level of detail: Texts often contain a wealth of information - from normative regulations and linguistic nuances to implicit meanings. With suitable methods of analysis, these can be systematically examined.
  • Comparability: Text analysis allows documents from different origins (e.g. countries, organisations, periods) to be compared, thereby highlighting developments or differences.
  • Interdisciplinary relevance: The method is applicable in technical, economic, and social sciences alike, as documents serve as central instruments of governance across all fields.

Weaknesses

  • Context dependency: Texts never exist in isolation but are embedded in political, legal, or organisational contexts. Without contextual knowledge, their meaning can easily be oversimplified or misunderstood.
  • Room for interpretation: Especially qualitative text analyses require a reflective approach, as multiple interpretations are possible. The subjectivity of the researcher must be controlled through transparent methodology.
  • Limited generalisability: Results relate to the specific documents analysed. They cannot automatically be generalised to all comparable texts or contexts.
  • Effort: The systematic evaluation of extensive texts is time-consuming, particularly when large collections of documents or multiple versions are compared.
  • Lack of completeness: Not all relevant texts are always accessible, for instance when organisations withhold internal documents. This may limit the explanatory power of the analysis.

4.8.3 Common Misconceptions

A common misconception is the assumption that texts "speak for themselves" and that their meaning can be directly understood without methodological reflection. In reality, every text is embedded in a social, political, and institutional context that is crucial for interpretation. Without such contextual knowledge, important meanings may be overlooked or misinterpreted.

It is also often believed that text analysis merely involves counting the frequency of terms. While quantitative measures such as word counts can provide useful indications, they are insufficient for scientific analysis. Only embedding texts within categories, discourses, or argumentative structures allows substantial insights to be gained.
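
Such a raw frequency count, useful as a first indication but insufficient on its own, can be sketched as follows; the example document and search terms are invented:

```python
from collections import Counter
import re

def term_frequencies(text, terms):
    """Count how often each (lower-cased) term appears in a document."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    counts = Counter(tokens)
    return {term: counts[term] for term in terms}

doc = ("Sustainability is a guiding principle. The strategy links "
       "sustainability with resilience and innovation.")
print(term_frequencies(doc, ["sustainability", "resilience", "innovation"]))
# → {'sustainability': 2, 'resilience': 1, 'innovation': 1}
```

The numbers say nothing about how the terms are framed or legitimised; that requires the interpretative, category-based work described above.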

Another misconception concerns the objectivity of text analysis. Precisely because texts are often complex and ambiguous, their analysis requires interpretative decisions. These are not "arbitrary," but they must be made transparent through methodological reflection. Ignoring the subjectivity of the researcher risks producing seemingly neutral but in fact biased results.

It is also frequently overlooked that texts alone are rarely sufficient to fully explain social or organisational phenomena. Text analysis can provide important insights but should, where possible, be complemented with other data sources (e.g. interviews, observations, statistical data) in order to achieve a more complete picture.

Finally, there is sometimes the assumption that text analysis is quick and straightforward because the data already exist. In practice, however, the systematic evaluation of extensive documents is time-intensive: categories must be developed, passages coded, and results interpreted.


 

 

If not stated differently, the contents of Research Design published on 23 August 2025 are © by Christian Huber, licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence. Reuse requires appropriate credit, a link to the licence, and an indication of any changes; you must not imply endorsement.
Assisted by AI
Generative pre-trained transformers (large language models) were used for proofreading and translation. Content was reviewed before publication; Christian Huber is responsible for accuracy and interpretation.
 
For publication details please see the Imprint.
 
For information on how personal data is processed please see the Privacy Policy.