
Principles of Academic Research

methodological and epistemological principles at a glance


 

This lesson conveys the fundamental concepts of academic work. It focuses on the core principles of scientific knowledge, the research process, and standards of good scientific practice. Furthermore, it demonstrates how scientific thinking and methodological approaches are also relevant in professional contexts.

Learning objective: Identify the formal and substantive foundations of academic work and correctly classify key terms.

Average course unit duration: 60 minutes


 


Summary [made with AI]

Note: This summary was produced with AI support, then reviewed and approved.


  • Academic work follows fundamental principles: Findings should be generalisable, based on logical reasoning and empirical testing, and remain verifiable by others. Only in this way does connectivity emerge - the integration of new results into existing debates.
     
  • The methods differ: Deduction derives hypotheses from theories, induction identifies patterns from data, abduction seeks plausible explanations for surprises. All three modes of reasoning have their place, often combined within the research process.
     
  • Verifiability means that results can be reconstructed intersubjectively. Subjective experiences must therefore be reflected, documented, and made transparent. Objectivity remains an ideal; what matters is the controlled disclosure of one's own position.
     
  • Good scientific practice demands more than methodological discipline. Binding principles are reliability, honesty, respect, and accountability. In addition, care, openness, proportionality, and participation matter - principles that also include ethical sensitivity, sustainability, and inclusion.
     
  • Research distinguishes four basic types: exploratory, when new fields are investigated; descriptive, when phenomena are systematically recorded; explanatory, when causes are clarified and hypotheses tested; evaluative, when measures or programmes are assessed.
     

Topics & Content


 

 


 

 
Reflection Task / Activity ^ top 
Before you read on, take a few minutes to write down what science means to you. What experiences have you had with academic work - at school, in your studies, or at work?

Important: At this stage, do not consult any specialist literature or explainer videos. The aim is to surface your own ideas - independent of official definitions or concepts.

Principles of Science ^ top 

Academic work follows fundamental principles that apply regardless of discipline or subject. They safeguard quality, enable transparency, and create a shared basis for academic discourse. By adhering to these principles, scientific findings remain coherent and verifiable.

Connectivity is a central quality criterion in academic work. It describes the capacity of research outcomes to link into existing scholarly debates - being taken up, developed further, or critically discussed by other researchers. Science is not a closed system of truths but a continuous, intersubjective process of knowledge generation, in which each new result can connect with - or deliberately contrast against - existing theories, questions, concepts, and methods. Connectivity does not imply agreement with the existing literature or theoretical stance. Critical engagement, counter-hypotheses, and methodological innovations can all be connective if they transparently and traceably reference the current state of knowledge and justify their deviation or advancement. Research that lacks connectivity remains isolated and makes little contribution to collective knowledge production.

In practice, connectivity manifests at multiple levels:

  • conceptual: key terms are clearly defined and positioned in relation to other concepts.

  • theoretical: existing models are adopted, modified, or refuted.

  • methodological: employed methods are documented and compared with established standards.

  • empirical: findings are presented so that they can be compared with other studies or used to confirm, extend, or nuance existing results.

In interdisciplinary research, connectivity is especially important: findings must be formulated so that they remain comprehensible across disciplinary boundaries. This demands conscious reflection on language use, reference theories, and the differing knowledge interests of various fields. Ultimately, connectivity is closely linked to transparency and reproducibility in academic work. Only when research processes, arguments, and conclusions are openly disclosed can others build upon them - through replication, further development, or critique. Connectivity forms the communicative foundation of academic work: insights should not remain isolated but be embedded in existing debates and made accessible for others to develop further. Achieving this connectivity requires adherence to certain fundamental principles that safeguard the quality and robustness of scholarly claims. Three central principles lie at the heart of this framework:

  1. Generalisable knowledge: research must extend beyond single cases to produce systematically justified statements.

  2. Logical and empirical rigour: combining reasoned argument with empirical verification ensures conclusions are theoretically grounded and supported by observation or data.

  3. Reproducibility: other researchers must be able to trace, verify, or replicate the methods, reasoning steps, and results.

These three principles constitute the methodological and epistemological scaffolding that makes connectivity possible. Only when claims are valid, logically consistent, and verifiable can they enter the academic dialogue and serve as the basis for confirmation, extension, or challenge.

The principles of science form the shared foundation for scholarly engagement.


1.1 Generalisable Knowledge ^ top 

A key aim of scientific research is to produce findings that go beyond individual cases. Science seeks generalisability - that is, statements that are transferable to other, comparable situations. When a single case is used to support a scientific argument or illustrate a hypothesis, it must be carefully assessed for its exemplary character. This means it should be selected and presented in such a way that it allows conclusions to be drawn about a broader group of cases or phenomena. Only then can genuinely generalisable knowledge be achieved.

Academic writing differs from other text types - such as argumentative essays or expert reports and professional statements in practice - above all in its aim of generating generally valid knowledge. The following table provides a structured comparison:

Feature | Argumentative Essay | Expert Report / Opinion | Academic Paper
Purpose | Critical engagement with a topic; forming a reasoned opinion | Assessment of a specific situation or measure | Generation of new, scientifically grounded insights addressing a defined question or research gap
Basis | Argumentative structure with pro and con perspectives | Professionally grounded assessment based on standards, data, or expert knowledge | Theories, models, empirical data, or systematic literature analysis; methodologically guided
Context of Use | General educational discourse | Case-specific, e.g. in engineering, medicine, law | Research context; transferable to other cases
Structure / Argumentation | 1. Introduction with thesis or research question, 2. Presentation of opposing views, 3. Reasoned position | 1. Establishment of relevant facts, 2. Analysis and evaluation based on objective criteria, 3. Conclusive expert opinion | 1. Research question or hypothesis, 2. Choice of appropriate methods, 3. Evidence and presentation of results, 4. Discussion and contextualisation
Position of the Author | Personal viewpoint encouraged, often in the conclusion | Professional assessment with a personal-expert position | Personal opinion is secondary; emphasis on objective, verifiable reasoning based on theory and data
Language Style | Factual, with rhetorical elements | Professionally factual, partly legal or technical | Scientifically precise, objective, and neutral
Use of Sources | General knowledge; newspaper articles; opinions / value judgements; examples; secondary literature | Laws, standards, guidelines; measurements, calculations; expert statements; "state of the art" | Primary and secondary literature; empirical studies; models, simulations; theories; methodologically collected data

The ability to work scientifically goes beyond simply gathering information. It requires structuring content in a methodologically sound and transparent way, embedding it into a broader system of knowledge. There is a difference between an argumentative text, an expert report with a professional judgement, and an academic paper aimed at generating new knowledge or critically evaluating existing findings.

Science is a dialogical process. It thrives on making findings transparent, subjecting them to critical discussion, and embedding them in existing theories and research frameworks. Only then can knowledge unfold its relevance beyond the individual case - knowledge that is, in short, connectable.

One term often mentioned in this context is representativeness. However, this is not synonymous with generalisability. Representativeness is a methodological concept, particularly relevant in quantitative research. It refers to the extent to which a sample reflects the characteristics of a larger population, e.g. in standardised surveys, secondary data analysis using large datasets, or simulations modelling real system behaviours. The demand for representativeness applies whenever results are to be generalised to a defined target population. Depending on the research design, this may also be relevant in interview studies - for instance, when standardised guide-based interviews are conducted with randomly selected participants. Case studies, too, can aim for theoretical representativeness by strategically selecting typical, extreme, or contrasting cases (theoretical sampling). In modelling and simulation, representativeness depends on whether the model is designed to generate generalisable conclusions about a population or system.

Generalisability, on the other hand, is a broader epistemic aim. It can be justified in various ways depending on the research approach:

  • quantitatively through statistical representativeness,
  • qualitatively through analytical generalisation, theoretical saturation, or typology building,
  • conceptually or theoretically through logical coherence and conceptual clarity.

Conclusion: Representativeness is one possible means of empirically supporting generalisable findings - especially in quantitative studies. However, it is neither universally required nor a guarantee of scientific quality. Non-representative studies - such as case studies, qualitative interviews or exploratory simulations - can also generate connectable and theoretically significant insights, as long as they are systematically reasoned, transparently documented, and contextually framed.
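
To make the idea of statistical representativeness concrete, the following Python sketch draws a simple random sample from a synthetic, purely illustrative population of building energy values and compares sample and population means (the population, the numbers, and the kWh/m2 framing are invented for this example):

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical population: annual energy use (kWh/m2) of 10,000 office buildings
    population = rng.normal(loc=180, scale=40, size=10_000)

    # Simple random sample: every unit has the same chance of selection,
    # which is the mechanism statistical representativeness relies on
    sample = rng.choice(population, size=200, replace=False)

    standard_error = sample.std(ddof=1) / np.sqrt(len(sample))
    print(f"Population mean: {population.mean():.1f} kWh/m2")
    print(f"Sample mean:     {sample.mean():.1f} kWh/m2")
    print(f"Approx. 95% interval for the estimate: "
          f"{sample.mean() - 1.96 * standard_error:.1f} to {sample.mean() + 1.96 * standard_error:.1f} kWh/m2")

Random selection of this kind justifies generalising from the sample to the defined target population, but it does not by itself guarantee representativeness in practice - non-response or coverage gaps can still introduce bias.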

Reflection Task / Activity ^ top 
Find one example each of an argumentative essay, an expert opinion, and an academic paper - either online or from your previous academic or professional experience.

For each example, note:
- What was the aim and context of the text?
- How was it structured?
- What sources were used?
- How did the text handle subjective judgements?

Then compare your three examples:
- Where do you see the main differences?
- Where do you notice similarities?

1.2 Verification of Truth through Logic & Empiricism ^ top 

In academic research, statements are not simply asserted; they must be systematically justified. Scholars therefore need to ask themselves on what basis their reasoning rests and how it can be verified. At this point, two fundamental principles become relevant: logic and empiricism. By applying logic, thoughts can be structured and conclusions drawn consistently from existing assumptions. Empiricism, in contrast, refers to observations, measurements, or experiences. These two principles are not opposed to one another but rather interact and complement each other. In scientific practice, this interplay becomes particularly clear through three modes of reasoning:

  • deduction, in which hypotheses are derived from theories
  • induction, in which new connections are established from empirical observations
  • abduction, in which a surprising observation leads to a plausible explanation.

The three approaches differ with regard to their starting point, their type of justification, and their role in the research process - depending on whether one moves from a general theory to specific cases (deductive), from specific observations to general statements (inductive), or from a single observation to an initial plausible explanation (abductive).

Aristotle: Deduction - Sherlock Holmes: Induction - Researcher: Abduction

The type of research question (e.g. open or closed) is not decisive for this distinction. What matters is how the reasoning process is structured.

The following overview contrasts the three modes of reasoning and highlights central differences regarding their starting point, purpose, evidential strength, and academic function. The examples illustrate how each form of logic is applied in research practice.

Aspect | Deductive approach | Inductive approach | Abductive approach
Direction of reasoning | From the general to the specific | From the specific to the general | From an observation to the most plausible explanation
Starting point | Theory, model or assumption | Observations, experience, data | Surprising or unexpected observation
Purpose | To test or apply an existing theory | To identify new patterns, to develop hypotheses | To formulate an initial, plausible working hypothesis
Evidential strength | Logically compelling if the theory is correct | Probable, but not certain | Possible and plausible, but not yet confirmed
Research function | To test hypotheses, confirm or refute theories | To generate hypotheses, develop theories | To develop new ideas and explanations
Example | "If sustainable buildings increase user satisfaction, then people in green buildings should be more satisfied." | "Many users report higher satisfaction in green buildings - perhaps this is related to building quality." | "The lawn is wet - it has probably rained."

1.2.1 Deductive Reasoning ^ top 

Deductive reasoning is an epistemological process - that is, a way of reasoning concerned with how knowledge is generated and justified - in which specific conclusions are logically derived from general principles. This approach is often described as "top-down" because it proceeds from a general theory or hypothesis to specific predictions or conclusions. Deductive inferences are logically necessary: if the general principles are true, then the derived conclusions must also be true.

If A is true and B falls under A, then B must also be true.

In scientific practice, this means that a theory or model is used to formulate questions or hypotheses that can subsequently be tested empirically. The validity of deductive reasoning, however, depends solely on the correctness of the premises and the logical structure - not on empirical observation.
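
The underlying schema can also be written formally. The following rendering is added here for illustration only and echoes the green-building example from the comparison table above:

    \text{Premise 1 (theory): } \forall x\, \big(F(x) \rightarrow G(x)\big)
    \qquad \text{Premise 2 (case): } F(b)
    \qquad \text{Conclusion: } G(b)

Read with that example: if every sustainable building increases user satisfaction (theory) and building b is sustainable (case), then higher satisfaction in b follows necessarily - provided the premises hold, which is exactly what empirical testing must then examine.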

  1. General statement or theory:
    Deductive reasoning begins with a general statement or theory that is grounded in established principles or prior scientific knowledge. This general statement is referred to as the premise and forms the basis for the deductive process.

  2. Derived conclusion:
    The purpose of deductive reasoning is to derive specific predictions or conclusions from the general premise. These conclusions are, in principle, logically necessary: if the premise is true, the conclusion must also be true.

Typical Use Cases ^ top 
  • Hypothesis formulation: Researchers often use deductive reasoning to generate hypotheses. A hypothesis is a specific prediction based on a general theory. Deduction enables researchers to formulate expectations and then test these through observation or experimentation.

  • Theory testing: Deductive reasoning is employed to test and validate existing theories. When predictions derived from a theory are confirmed by empirical data, this supports the theory.

  • Falsification: Deductive reasoning also allows for falsification. If predictions derived from a theory are contradicted by empirical evidence, the theory may need to be revised or rejected.

Challenges ^ top 
  • Assumptions: Deductive reasoning relies on the accuracy of its premises. If these are flawed, the conclusions will also be invalid.

  • Complexity: In complex systems or real-world contexts, conclusions may not be as straightforward as theoretical examples suggest. Deductive reasoning can be more difficult to apply in such situations.

  • Uncertainty: Many areas of research involve uncertainty and variability. Deductive logic often abstracts from such factors and may not adequately reflect the nuance of empirical data.

1.2.2 Inductive Reasoning ^ top 

In contrast to deduction, inductive reasoning is based on specific observations and leads to general conclusions. This approach is often described as "bottom-up," as it proceeds from concrete data to general patterns or regularities. Inductive conclusions are not logically necessary, as they rely on probability and generalisation.

  1. Observations and data:
    The process of inductive reasoning begins with careful observation and data collection. Researchers gather information on the phenomena or events they wish to investigate.

  2. Identification of patterns or trends:
    Once sufficient data has been collected, researchers analyse it to identify patterns or trends. These may relate to recurring characteristics or behaviours.

  3. Generation of conclusions:
    Based on the identified patterns or trends, researchers formulate general conclusions or hypotheses. These conclusions are probabilistic in nature - that is, the patterns observed are likely, though not guaranteed, to apply to future cases or situations.
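
As a purely illustrative sketch of this three-step process (hypothetical ratings, echoing the green-building example used earlier), the following Python snippet moves from individual observations to a probabilistic generalisation:

    # Hypothetical satisfaction ratings (1-10) reported by building users
    green_building_ratings = [8, 7, 9, 8, 6, 9, 8, 7]
    conventional_ratings = [6, 5, 7, 6, 5, 6, 7, 5]

    def mean(values):
        return sum(values) / len(values)

    # Steps 1-2: collect observations and identify a pattern
    difference = mean(green_building_ratings) - mean(conventional_ratings)
    print(f"Observed difference in mean satisfaction: {difference:.2f}")

    # Step 3: formulate a general but probabilistic conclusion (a hypothesis, not a proof)
    if difference > 0:
        print("Inductive generalisation: users of green buildings tend to be more satisfied.")

Whether this pattern holds beyond the observed cases is precisely what subsequent deductive testing on new data has to establish.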

Typical Use Cases ^ top 
  • Pattern discovery: Researchers use inductive reasoning to detect patterns or trends in data. This is particularly useful in exploratory research, especially when little prior knowledge is available about a phenomenon.

  • Hypothesis generation: Patterns identified through inductive reasoning can serve as a basis for generating hypotheses, which may later be tested using deductive methods.

  • Theory development: Inductive reasoning can contribute to the development or refinement of theories. When repeated observations indicate consistent patterns, broader theoretical frameworks may be constructed.

Challenges ^ top 
  • Generalisability: Inductive reasoning involves generalising from specific observations. However, these generalisations may not hold true in all contexts.

  • Sample representativeness: The strength of inductive conclusions depends heavily on the representativeness of the sample. A non-representative sample can lead to distorted or unreliable conclusions.

  • Unforeseen variables: Inductive conclusions may be affected by unknown or uncontrollable variables that were not accounted for in the analysis.

1.2.3 Abductive Reasoning ^ top 

In contrast to deduction and induction, abductive reasoning aims to find a plausible explanation for a surprising or unexpected observation. This approach is often described as an "inference to the best explanation". Abduction does not provide logical certainty or a general rule, but rather an initial, provisional interpretation that serves as a starting point for further research.

  1. Unexpected observation:
    The process of abductive reasoning begins with an observation that cannot easily be explained with existing knowledge. Researchers encounter a phenomenon that raises questions.

  2. Plausible explanation:
    On the basis of this observation, they formulate a possible explanation. This explanation is not conclusive, but it appears reasonable and coherent in the given context.

  3. Working hypothesis:
    Abductive reasoning leads to a hypothesis that links the observation with the explanation. This hypothesis is provisional and must be examined further through inductive or deductive methods.

Application ^ top 
  • Exploratory research: Abduction is especially useful when little prior knowledge about a phenomenon is available. Researchers can develop initial explanations that are later tested systematically.

  • Hypothesis generation: Abductive reasoning makes it possible to formulate new hypotheses that are grounded in surprising or unexplained observations.

  • Qualitative research: In approaches such as Grounded Theory, abduction is used to derive preliminary interpretations from interviews or case studies, which are then further refined and tested.

Challenges ^ top 
  • Uncertainty: Abductive reasoning is neither conclusive nor highly probable; it is only plausible. Later findings may prove it wrong.

  • Subjectivity: What counts as plausible often depends on the researcher’s background knowledge and perspective.

  • Verifiability: Abduction alone does not provide verification. Only subsequent inductive and deductive testing can make an abductive hypothesis academically robust.

1.2.4 Exemplary Application in Research Methods ^ top 

The following table uses typical research methods and research designs to show the application of deductive, inductive and abductive reasoning:

Method | Deductive application | Inductive application | Abductive application
Questionnaire | Questions are based on theoretical concepts or previous studies. Aim: to test a hypothesis. | Exploratory items in a pilot study to identify relevant topics. | Unexpected responses lead to a new, provisional hypothesis that is explored in further studies.
Interview (guide) | Structure and question logic follow a theory or conceptual assumptions from the literature. | Conversations without a fixed structure. Aim: to discover new perspectives; categories emerge only during analysis. | Individual surprising statements by respondents inspire new interpretations or first explanatory approaches.
Simulation | Model assumptions are based on theoretical knowledge or empirically established relationships. | Model is developed from existing data sets (e.g. through data-driven analysis or machine learning). | Unexpected simulation results serve as a starting point for formulating new hypotheses about system relationships.
Case study | Case selection and analysis are guided by a theory or typical hypotheses from the literature. | Case study is used to investigate complex phenomena openly. Theory emerges during analysis. | Distinctive features in a case lead to an initial, plausible explanation that can be tested in further cases.
Secondary data analysis | Literature-based hypotheses are tested systematically using available data. | Aim is to explore existing data, e.g. to identify unexpected patterns or relationships. | An unusual data pattern is interpreted as a clue to a new relationship and formulated as a hypothesis.
Conclusion ^ top 

Deductive, inductive, and abductive reasoning each have different strengths - all three are important in academic research:

  • Deduction: useful when theories or models already exist and need to be tested or applied.
  • Induction: suitable when a topic is new or when the aim is to explore it openly to identify relevant aspects.
  • Abduction: helpful when a surprising observation requires an initial plausible explanation, which can then be tested further.

In many research projects, these approaches are combined:

  • first abductively, to interpret an unexpected observation and propose an initial hypothesis,
  • then inductively, to collect further data and identify patterns,
  • and finally deductively, to test the hypotheses systematically.
Reflection Task / Activity ^ top 

Research what is meant by "deduction", "induction", and "abduction" in academic work.

Develop an example from your own field for each - one that follows a deductive approach, one that is inductive, and (if possible) one that is abductive.
Explain in your own words what the difference is - and for which purpose you would use each approach.


1.3 Verifiability ^ top 

Scientific work is founded on the verifiability of knowledge. This is enabled by intersubjective reasoning: a complex matter must be comprehensible and checkable by multiple observers. Since absolute objectivity is rarely attainable, intersubjectivity is considered a central quality criterion.

Intersubjectivity

1.3.1 Subjectivity ^ top 

Subjectivity refers to personal perspectives shaped by individual experiences, emotions or values. Subjective statements are not universally valid and usually cannot be independently verified.

  • Subjective judgements are based on personal perception.

  • They are not necessarily supported by data or facts.

  • Different individuals may interpret the same phenomenon differently.

1.3.2 Objectivity ^ top 

Objectivity refers to an ideal state of scientific knowledge where statements are entirely independent of the person making them. Objective statements must be reproducible under identical conditions, and understandable to anyone, regardless of individual background or perspective.

  • Objective statements hold true regardless of observer or context.

  • They are fully verifiable and consistently reproducible.

  • Objectivity is aspired to in science, but fully attainable only in certain fields (e.g. mathematics, physics).

1.3.3 Intersubjectivity ^ top 

Intersubjectivity denotes shared understanding between multiple subjects. It is the main criterion for scientific verifiability: a finding is considered valid when it can be independently reconstructed by others - especially within the scholarly community.

Basic principles ^ top 
  • Communication and documentation: Results are prepared so that others can evaluate and understand them.

  • Shared understanding: Terms, methods and interpretations are aligned within the scholarly community.

Applications ^ top 
  • Peer review: Scholars assess texts for consistency, transparency and academic quality.

  • Scientific discourse: Knowledge emerges through exchange - disagreement and consensus are integral to the process.

Challenges ^ top 
  • Cultural and linguistic contexts: Varied understandings may lead to misinterpretations.

  • Limited perspectives: A homogeneous research team may offer a narrow horizon.

  • Unconscious biases: Researchers, too, hold preconceptions that shape interpretation.

  • Complex phenomena: Even with full disclosure, some interpretations remain contested.

1.3.4 Controlled Subjectivity ^ top 

Personal experiences, values or theoretical positions influence how data are gathered, analysed and presented. Rather than eliminating these influences, scientific practice demands their explicit reflection and disclosure. This is known as controlled subjectivity.

Basic principles ^ top 
  • Self-reflection: Researchers question their own perspective and disclose personal assumptions, interests or experiences.

  • Transparent documentation: Decisions regarding methodology, scope or interpretation are systematically justified.

Applications ^ top 
  • Subjective influences are not avoided but made visible and critically reflected upon.

  • Readers can trace how results emerged and better assess their validity.

  • Controlled subjectivity enhances academic integrity and trustworthiness.

The following text modules provide practical examples of how controlled subjectivity can be implemented and made transparent throughout the research process - including the reflection of one's own role, the conduct of interviews, data interpretation, and methodological documentation.

Reflecting on the Researcher’s Role:

Within the study, the researcher did not operate solely from an external, observational perspective but was also actively embedded in the project context. This dual role - as both researcher and participant - enabled deeper insights into internal processes, but also carried the risk of unintentionally influencing data collection and interpretation.

To minimise potential bias, the researcher’s positionality was regularly reflected upon in line with the principle of controlled subjectivity. Key decisions, such as the selection of data, the formulation of questions, and the analytical procedures, were systematically documented and transparently justified.

Potential Influence on Data Collection:

  • The researcher consciously reflected on their personal proximity to participants and familiarity with the research context, as these factors may influence participants’ responses (e.g. through social desirability or implicit role expectations).

To counteract such effects, care was taken to use open, non-leading questions during interviews, maintain neutral body language, and adopt a reserved, non-directive manner. The research role was clearly communicated in advance to clarify expectations and avoid role conflicts.

Potential Influence on Interpretation:

Data interpretation was conducted with ongoing reflection on possible preconceptions, which might arise from disciplinary background, project involvement, or personal experience.

Alternative interpretations were actively considered, and the analysis was complemented by peer feedback. Interpretations closely aligned with personal perspectives were explicitly marked to strengthen transparency and reveal potential interpretive frameworks.

Transparency and Academic Integrity:

Rather than being excluded, personal perspectives and theoretical orientations were explicitly acknowledged as part of the analytical process. These influences were made transparent through systematic documentation (e.g. reflexive notes, methodological justifications), thereby supporting research integrity and enhancing the trustworthiness of findings.

Challenges ^ top 
  • Blind spots: Personal positions are not always consciously recognised.

  • Role conflict: Researchers may simultaneously be observers, participants or stakeholders - requiring careful handling.

  • Lack of training: Dealing with subjectivity is often not formally taught in research practice.

  • Limits of disclosure: Not all influences can be fully made transparent or adequately expressed.

1.3.5 Subjectivity - Objectivity - Intersubjectivity - Controlled Subjectivity Compared ^ top 

A comparative table illustrates the four concepts and typical examples of their application in scientific contexts. It supports reflection on the levels of positionality and verifiability in scholarly claims.

Level | Description | Example
Subjectivity | Personal perception shaped by individual experience and emotion. | "I feel the winters in the Alps are milder today than in the past."
Objectivity | Statement that is independent of the observer and reproducible under identical conditions. | "Temperature measurements over 30 years show a significant decline in snow days."
Intersubjectivity | Agreement within a scientific community on terms, methods and results. | "Climate researchers worldwide agree through data analysis and discussion on anthropogenic climate change."
Controlled Subjectivity | Reflexive and transparent disclosure of individual perspectives to enhance verifiability. | "As I was directly involved in the organisational decision-making, I transparently reflected on my role and its influence in the research process."
Reflection Task / Activity ^ top 
Research the term "intersubjectivity" and distinguish it from "subjectivity" and "objectivity".

Consider: Why is intersubjectivity so important for scientific verifiability? Find an example from your field where intersubjectivity plays a role.

2. Standards of Good Scientific Practice ^ top 

Science thrives on trust: trust in research quality, in the integrity of researchers, and in the verifiability of findings. To uphold this trust, the international scientific community has developed a widely accepted consensus on standards and guidelines for Good Scientific Practice (GSP). These standards define how research should be planned, conducted, documented, and communicated - and where misconduct begins. Universities, funding bodies, and research institutions issue corresponding guidelines; a prominent example, the European Code of Conduct for Research Integrity, is presented below.


2.1 European Code of Conduct for Research Integrity ^ top 

The European Code of Conduct for Research Integrity, developed by ALLEA (All European Academies), is a widely recognised framework. Founded in 1994, ALLEA unites over 50 national science academies across more than 40 European countries - including the Austrian Academy of Sciences, the German Leopoldina, and the UK’s Royal Society. ALLEA’s objectives include fostering scientific exchange in Europe through:

  • Advising on science policy,

  • Promoting research freedom and integrity,

  • Developing shared standards for good scientific practice.

The European Code of Conduct - drafted in collaboration with the European Commission - outlines four central principles:

Principle | Meaning
Reliability | Research must be methodologically sound, carefully planned and transparently conducted.
Honesty | All findings and statements must be presented truthfully, completely and openly.
Respect | The rights, dignity and interests of humans, animals, the environment and cultural heritage must be respected.
Accountability | Researchers bear responsibility for their work and its consequences - from planning through publication.

The Code also provides practical guidance on:

  • Publication ethics and authorship roles,

  • Data management and archiving,

  • Peer review, evaluations, and mentoring,

  • Addressing misconduct, conflicts and whistleblowing.

It serves both as a behavioural standard for researchers and as a guideline for institutions, ethics committees and funding agencies.

2.1.1 Reliability ^ top 

Research should be systematic, methodologically rigorous and conducted with utmost care. Scientific insights must be based on transparent, verifiable and consistent methods - not chance or arbitrariness.

  • Recognised methodology: Researchers select appropriate methods corresponding to the research question and justify these decisions transparently. Methods must meet disciplinary standards and be well-reasoned.

  • Transparency in the research process: Every stage - from data collection through analysis to interpretation - is documented so others can understand, evaluate or replicate the process. Preliminary decisions (e.g. case selection, data exclusion) must be traceable.

  • Reproducibility & repeatability: Studies should be replicable under similar conditions. This also includes listing all materials used, software versions, and analysis criteria.

  • Consistency and plausibility: Results should remain comparable across tests, data sources or time points. Any inconsistencies must be explicitly discussed rather than concealed.

2.1.2 Honesty ^ top 

Integrity is foundational to scientific work. Researchers must communicate transparently, impartially and fully throughout the research process.

  • Transparency towards third parties: All relevant information - such as study design, funding sources and potential conflicts of interest - must be disclosed. Concealment is unacceptable.

  • Unbiased presentation of data and findings: Results should be presented objectively, without deliberate distortion in favour of hypotheses.

  • Comprehensive reporting: Unexpected, insignificant or contradictory findings must also be reported. Selective omission of data (reporting bias) violates research integrity.

  • Fair authorship and citation: Contributions from others must be properly credited. Ghostwriting, honorary authorship or plagiarism are impermissible. Those who did not make substantial contributions should not be listed as authors.

2.1.3 Respect ^ top 

Research never occurs in a vacuum - it concerns people, animals, society, and the environment. Ethical awareness and responsibility are required at all times.

  • Respect for human dignity: Participants must provide voluntary, informed consent. Privacy and confidentiality must be protected.

  • Avoidance of harm: Research must not cause psychological or physical harm - to respondents, observed individuals, affected groups or society at large.

  • Environmental responsibility: Research should consider ecological effects, use resources responsibly, and avoid unnecessary emissions or strain.

  • Cultural and historical sensitivity: Respect and sensitivity are necessary when working with heritage, historic sites or socially sensitive topics.

  • Animal welfare: The 3Rs must be observed: Replace (use alternatives where possible), Reduce (minimise the number of animals), Refine (minimise suffering).

2.1.4 Accountability ^ top 

Scientific research entails responsibility - to the scholarly community, the public, one’s institution and future generations.

  • Diligent documentation and archiving: Data, analysis steps, materials and versions are recorded in a way that ensures long-term accessibility and scrutiny. Empirical studies must comply with data retention requirements.

  • Disclosure of funding and interests: Funding sources, institutional ties and potential conflicts must be declared, especially in evaluations, industrial research or consultancy.

  • Handling of errors: Mistakes are not taboo but part of the scientific discourse. Errors should be acknowledged openly, and published findings corrected or retracted if necessary.

  • Quality culture and mentoring: Senior researchers bear responsibility for transmitting good scientific practice to students, doctoral candidates and colleagues. Ethics training, supervision and discussion are integral to scholarly qualification.


2.2 Complementary Principles in Research Practice ^ top 

In addition to the four core principles of the ALLEA Code (Reliability, Honesty, Respect, and Accountability), many national and discipline-specific guidelines include further principles and attitudes that characterise reflective scientific practice. These expand or refine the ALLEA principles to address contexts where particular considerations - such as ethical sensitivity, stakeholder inclusion, or sustainability - are key.

2.2.1 Care ^ top 

Care entails methodical discipline, reflection and precision at every stage of scientific work:

  • Formulating the research question: The question is clearly articulated, justified, and developed without unexamined pre-assumptions.

  • Conducting data collection: The chosen method is implemented with care - adhering to protocols, maintaining precise records and applying appropriate prompts.

  • Analysis & interpretation: Interpretation is transparent and logical; statements are grounded in data, not speculation or wishful thinking.

  • Citation & source usage: Ideas, arguments or findings from others are accurately cited; secondary sources are clearly identified.

  • Data handling: Research data are carefully stored, versioned and protected against loss or unauthorised access.

2.2.2 Openness ^ top 

Openness denotes a willingness to communicate research transparently, share findings, and remain receptive to criticism or alternative perspectives:

  • Process and goal transparency: Study design, assumptions, methodological choices and limitations are openly disclosed.

  • Access to results: Findings are published in open formats where possible (e.g. Open Access).

  • Access to data: Data are documented and shared to enable reuse, verification or comparison (e.g. Open Data).

  • Willingness to engage in dialogue: Dissenting views are welcomed as opportunities to broaden understanding.

  • Interdisciplinary collaboration: Researchers actively seek and maintain cooperation across disciplinary boundaries.

2.2.3 Proportionality ^ top 

Research must be ethically justifiable. Proportionality implies that the expected benefits are appropriate relative to the effort, risks or burdens involved:

  • Consideration of sensitive topics: Personal or distressing questions are only asked if essential to the research aims.

  • Reducing participant burden: Those involved are not subjected to unnecessary physical or psychological stress.

  • Responsible use of resources: Time, finances, materials, and environmental resources are used efficiently.

  • Ethical evaluation in animal research or interventions: Alternatives are considered and unnecessary repetition avoided.

2.2.4 Participation ^ top 

Participation means actively involving individuals or groups directly or indirectly affected by the research, thereby enhancing relevance, ethical integrity and acceptance.

  • Involvement in planning: Stakeholders and affected groups contribute to topic selection or formulation of research questions.

  • Collaborative data collection and analysis: Data gathering and interpretation involve participants (e.g. community-based research).

  • Transparent communication of results: Findings are presented in accessible formats and shared with participants.

  • Recognition of experiential knowledge: Subjective experiential insights are respected as valuable contributions to analysis.

2.2.5 Additional Context-Specific Principles ^ top 

Depending on the field, institution or research context, additional principles may further support scientific integrity and societal responsibility:

  • Sustainability: Research is designed with ecological, social and institutional longevity in mind.

  • Transdisciplinarity: Systematic collaboration between academia and practice is actively pursued.

  • Diversity and inclusion: Research encompasses varied perspectives and avoids discrimination or systematic exclusion.

  • Responsible use of technology: New technologies are assessed not only for efficiency but also for social implications, power relations or ethical risks (e.g. AI, surveillance, genetic engineering).

These additional principles reinforce scientific responsibility and ethical grounding - especially in transdisciplinary and practice-oriented fields such as sustainability, public health research and social innovation.

Reflection Task / Activity ^ top 
Examine two different codes of scientific conduct (e.g. ALLEA, DFG, OeAWI).

Which principles are shared? Which are emphasised differently? Consider how you might apply these standards concretely in your studies or research project.

3. Types of Scientific Research ^ top 

Academic research is a multifaceted process aimed at the systematic investigation of phenomena to generate new knowledge, test existing theories, and develop practical solutions to current challenges. Differentiating between various types of research helps to better classify research projects, understand methodological approaches, and interpret findings appropriately.

The following section introduces four core types of scientific research. Each serves a distinct function within the process of knowledge generation - from initial orientation within a topic area to evaluating the effectiveness of specific interventions. These four core types - exploratory, descriptive, explanatory, and evaluative - provide a foundational framework that reflects the research interest of a given study. Many additional types of research, such as prognostic, interventional, theoretical, or normative, either derive from or complement these fundamental forms. They are often expanded or specialised applications of the four central research logics.


3.1 Overview: Exploratory, Descriptive, Explanatory, and Evaluative Research ^ top 

The four basic types of scientific research can be distinguished by their objectives, theoretical grounding, and methodological approaches. Each represents a different pathway to knowledge that can be applied depending on the research question and context. Exploratory research primarily investigates underexplored phenomena, while descriptive research aims to systematically capture existing states or trends. Explanatory research seeks to identify causal relationships and test hypotheses. Evaluative research, in turn, assesses the effectiveness or impact of actions and programmes. The following table compares key characteristics of these four research types and supports their differentiation in academic practice.

Research Type | Objective | Theoretical Basis | Methodology | Typical Outcomes
Exploratory | To explore new topics, generate hypotheses | Low or open | Flexible, often qualitative | Initial insights, concepts, research questions
Descriptive | To systematically describe phenomena | Optional | Standardised, usually quantitative | Frequencies, distributions, status reports
Explanatory | To explain causes and relationships | High | Standardised, quantitative | Explanatory models, hypothesis testing
Evaluative | To assess interventions or programmes | Goal- or theory-oriented | Mixed methods common | Assessments (e.g. effectiveness, efficiency)

3.2 Exploratory Research ^ top 

Explores unknown territory by searching for hidden patterns in data or investigating the behaviour of a phenomenon.

Exploratory research is applied when there is little prior knowledge, no established theories, or existing explanatory models prove inadequate. Its aim is to provide orientation, develop initial explanatory approaches, and open up new research perspectives. It is particularly suitable in early project phases or when addressing complex, interdisciplinary questions.

Feature | Description
Aims | Preliminary orientation, discovery of new patterns
Theoretical relation | Open or limited
Methodological approach | Flexible, frequently qualitative
Data analysis | Descriptive, structure-seeking, hypothesis-generating
Typical methods | Interviews, focus groups, case studies, literature reviews
Challenges | Limited transferability, interpretative ambiguity, lack of theoretical frame
Basic Principles ^ top 
  • Aims: Exploratory research seeks to better understand phenomena that are poorly understood or newly emerging. The goal is to gain an initial overview and identify potential influencing factors or relevant relationships.

  • Theoretical relation: Research often commences without a fixed theoretical framework. Theories or concepts emerge from collected data or are applied retrospectively to contextualise findings.

  • Flexibility: Researchers adapt their methodological approach in response to emerging data and analysis. New insights may influence subsequent phases of data collection, which is especially common in qualitative designs.

  • Epistemic interest: The aim is not to test pre-defined hypotheses, but to generate new research questions, conceptual frameworks, and hypotheses for subsequent investigation.

  • Typical methods: Commonly employed methods include narrative interviews, focus groups, and ethnographic observations, as well as exploratory quantitative methods (e.g. cluster analysis, correlational patterns).
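
To show what the exploratory quantitative methods mentioned in the last bullet can look like in practice, here is a minimal Python sketch (synthetic data, invented variable names) that screens pairwise correlations to surface candidate relationships - material for hypotheses, not findings:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 120

    # Synthetic building indicators, purely illustrative
    building_age = rng.uniform(0, 60, n)
    energy_use = 150 + 1.2 * building_age + rng.normal(0, 20, n)  # contains a built-in relationship
    user_satisfaction = rng.normal(7, 1.5, n)                     # unrelated noise

    names = ["building_age", "energy_use", "user_satisfaction"]
    corr = np.corrcoef(np.vstack([building_age, energy_use, user_satisfaction]))

    # Flag notable pairwise correlations as candidate hypotheses for follow-up studies
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > 0.3:
                print(f"Candidate relationship: {names[i]} ~ {names[j]} (r = {corr[i, j]:.2f})")

Any relationship flagged this way would still need theoretical embedding and confirmatory testing before it counts as a result.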

Challenges ^ top 
  • Limited generalisability: Findings from exploratory studies are often context-specific. They provide initial hypotheses rather than statistically validated results.

  • Interpretive ambiguity: Due to the openness of the research process, interpretation plays a central role, requiring methodological sensitivity and transparent documentation.

  • Lack of theoretical embedding: Without grounding in existing theory, findings may remain isolated, limiting their connectivity and academic relevance.

  • Subjective bias: The researcher’s perspective actively shapes the exploratory process. To mitigate bias, reflective practice and methodological safeguards (e.g. triangulation) are essential.

Examples & Application Fields ^ top 
  • Identifying emerging social indicators for sustainability reporting in small and medium-sized enterprises.

  • Exploring barriers to adopting building‑related energy data within ESG reporting frameworks.

  • Capturing previously unconsidered perspectives on sufficiency strategies in corporate energy management, for example via interviews with facility managers.

  • Conducting exploratory case studies on user perceptions of service quality in modern office designs.

  • Investigating factors influencing vacancy trends in regional secondary real estate markets.

  • Interviewing technical facility managers about obstacles to digitising maintenance processes.

Exemplary Methods ^ top 
  • Literature review: Systematic analysis of existing studies on sustainable land use management or ESG indicators within the real estate sector.

  • Expert interviews: Semi-structured interviews with energy consultants, property managers or sustainability officers to capture their assessments and professional experience.

  • Focus groups: Moderated discussions with diverse stakeholder groups, such as building users, investors, or planners.

  • Case studies: In-depth analysis of selected real estate projects that demonstrate innovative approaches to climate protection, energy efficiency, or user participation.

Possible Applications ^ top 

Findings from exploratory research often serve as a foundation for subsequent research phases, including:

  • Development of a theoretical model that systematises the identified factors.

  • Derivation of testable hypotheses for quantitative or experimental studies.

  • Design of applied research projects, e.g. on new ESG indicators or strategies to optimise sustainability performance in building operations.


3.3 Descriptive Research ^ top 

Describes phenomena precisely as they appear - without altering, explaining, or evaluating them.

Descriptive research systematically and structurally captures the current state of a phenomenon. It is employed to document characteristics, distributions, or associations without necessarily offering theoretical explanations. The aim is to generate an accurate representation of reality. Descriptive studies often serve as a foundation for subsequent analytical or explanatory research by providing essential baseline data.

Feature | Description
Aim | Systematic description of phenomena, conditions, distributions
Theoretical grounding | Often present, but not essential
Methodological approach | Standardised, often quantitative
Data analysis | Statistical-descriptive (frequencies, means, standard deviations, etc.)
Common methods | Standardised surveys, secondary data analysis, observations
Challenges | No causal claims, measurement issues, risk of bias
Basic Principles ^ top 
  • Aim: The goal is to capture attributes, behaviours, or attitudes of a specific target group or research object with precision. Descriptive research is not intended to test hypotheses or explain causal mechanisms; rather, it documents what currently exists.

  • Theoretical grounding: Descriptive research can be informed by theory (e.g. through structured data collection instruments), but it is not necessarily theory-bound. Classifications, definitions, or reference values often provide orientation.

  • Methodological approach: Standardised quantitative methods are commonly used. In some contexts, structured qualitative techniques such as systematic observation or coding of open responses may also be applied.

  • Research interest: Descriptive research focuses on "how frequent", "how strong", "in which manifestations" - rather than causes. Its strength lies in producing empirically substantiated clarity and comparability.

  • Common methods: Typical methods include structured questionnaires, counts, analysis of existing datasets (e.g. energy consumption, vacancy rates), and systematic observations.
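
To make the analytical step tangible, the following Python sketch computes the kind of descriptive measures referred to above for a set of hypothetical survey responses (the ratings are invented for this example):

    import statistics
    from collections import Counter

    # Hypothetical responses: satisfaction with indoor climate (1 = very poor ... 5 = very good)
    responses = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4, 3, 5, 2, 4, 3]

    # Frequencies, central tendency and dispersion: "how frequent" and "how strong"
    print("Frequencies:", dict(sorted(Counter(responses).items())))
    print(f"Mean: {statistics.mean(responses):.2f}")
    print(f"Median: {statistics.median(responses)}")
    print(f"Standard deviation: {statistics.stdev(responses):.2f}")

The output documents the current state of the responses; it makes no claim about why the ratings are as they are.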

Challenges ^ top 
  • No causal claims: Descriptive research does not provide causal explanations. Even if correlations are reported, no causal relationships can be inferred.

  • Measurement issues: The quality of the results depends heavily on the operationalisation. Imprecise questions, unclear categories, or poorly executed data collection can distort findings.

  • Risk of bias: Even with standardised methods, systematic errors may occur - for example, due to non-response bias in surveys or selective data availability in secondary data analyses.

  • Interpretation of data: Even without claiming causality, descriptive data must be interpreted with care. Context, data sources, and possible influencing factors must be considered.

Examples and Areas of Application ^ top 
  • Assessment of average energy consumption per square metre in office buildings in Austria.

  • Descriptive analysis of user satisfaction with facility services (e.g. cleaning, indoor climate) based on standardised questionnaires.

  • Measurement of vacancy rates in the property portfolios of municipal housing providers across different regions and time periods.

  • Content analysis of publicly available ESG reports to determine the frequency of key indicators (e.g. CO2 emissions per employee).

  • Observation and documentation of workspace utilisation in co-working environments during weekdays.

Exemplary Methods ^ top 
  • Online survey: Quantitative study on tenant satisfaction with energy management in residential buildings, including frequency distributions and mean values.

  • Secondary data analysis: Use of data from statistical offices, energy suppliers, or industry reports to describe market or consumption structures.

  • Structured observation: Systematic recording of space usage (e.g. meeting rooms, communal areas) in public buildings using a predefined observation grid.

  • Monitoring reports: Regular status reports on sustainability indicators within organisations or institutions, e.g. carbon footprint or water consumption.

Possible Applications ^ top 

Descriptive research frequently provides the basis for:

  • Benchmarking across properties, regions, or time periods (e.g. energy consumption per m2).

  • Identification of anomalies, trends or deviations that indicate the need for further investigation.

  • Development of data-driven indicator systems, for example for ESG reporting or sustainability controlling.

  • Preparation of follow-up studies by selecting relevant variables for explanatory or experimental research.


3.4 Explanatory Research ^ top 

Seeks to clarify why a phenomenon occurs by analysing cause-effect relationships and testing verifiable hypotheses.

Explanatory research - also referred to as causal or analytical research - aims to identify relationships between variables, test hypotheses, and uncover causal links. It frequently builds upon descriptive findings and often employs quantitative methods to deliver well-founded explanations for observed phenomena. The central question is: Why is something the way it is?

Tabular Overview: Explanatory Research ^ top 

Feature | Description
Goal | Explanation of causes, mechanisms, and relationships
Theoretical reference | High - theories and hypotheses are central
Methodology | Standardised, primarily quantitative
Data analysis | Statistical inference (e.g. regression, significance tests)
Typical methods | Experiments, cross-sectional or longitudinal studies, regression analysis
Challenges | Proving causality, control groups, confounding variables, validity
Basic Principles ^ top 
  • Goal: The aim is to understand which factors cause a given outcome. Explanatory research goes beyond description by identifying systematic influences and testing theoretically derived hypotheses.

  • Theoretical reference: Theories provide the foundation. Hypotheses are derived from theoretical models and tested empirically. Ideally, findings contribute to theory refinement.

  • Methodology: Explanatory research requires precise, controlled methods. Common approaches include laboratory and field experiments or large-scale quantitative designs. Internal validity - ensuring that observed effects are due to the studied variables - is essential.

  • Research interest: It seeks to clarify mechanisms of influence - e.g. does an energy management system actually reduce electricity consumption? What is the impact of user behaviour on indoor climate? (A minimal sketch of such a group comparison follows this list.)

  • Typical methods: Randomised controlled trials, quasi-experiments, multivariate regression, time-series analysis, or structural equation modelling are frequently applied.
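
To make this research interest tangible, the following minimal Python sketch compares electricity consumption between two randomly assigned groups of buildings, one with and one without an energy management intervention. The figures are entirely invented, and the test shown (Welch's t-test from scipy) is only one of several defensible choices.

from scipy import stats

# Invented example data: annual electricity consumption in kWh per m2.
with_energy_mgmt = [92.1, 88.4, 95.0, 84.7, 90.3, 87.9]    # intervention group
control_group    = [101.5, 97.2, 99.8, 104.1, 95.6, 100.9]  # no intervention

# Welch's t-test: does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(with_energy_mgmt, control_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A real study would of course need a sufficiently large, genuinely randomised sample and checks on confounding variables, as discussed under Challenges below.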

Challenges ^ top 
  • Demonstrating causality: Providing empirical evidence for causal links is methodologically demanding. This requires clearly defined hypotheses, control groups, and the consideration of confounding factors.

  • External validity: What holds true in a dataset or controlled experiment may not generalise to other settings. A balance between internal and external validity is essential.

  • Operationalisation: Theoretical concepts must be translated into measurable variables using valid scales or indicators. Inaccurate operationalisation compromises reliability.

  • Data requirements: High-quality, comprehensive datasets are needed. Missing values, bias, or small samples can distort results.

Examples and Application Areas ^ top 
  • Examining whether certification according to DGNB or ÖGNI correlates with significantly improved energy performance in new buildings.

  • Analysing how user behaviour (e.g. window opening, equipment use) influences heating demand in office buildings.

  • Testing whether ESG reports with high transparency scores lead to better investor feedback (e.g. using regression models with control variables).

  • Conducting experimental studies to determine if visualised CO2 traffic lights influence ventilation behaviour in shared office spaces.

  • Measuring the effectiveness of information campaigns on waste reduction in residential complexes.

Exemplary Methods ^ top 
  • Experiment: Random assignment of buildings to two groups - one receives an advanced energy monitoring tool, the other does not. Compare consumption afterwards.

  • Regression analysis: Assess the influence of location, building age, and technical standard on operating costs per m2 (see the sketch after this list).

  • Time-series analysis: Study user satisfaction before and after introducing sustainable cleaning services.

  • Quasi-experiment: Compare two real estate locations with differing ESG strategies under otherwise similar conditions to evaluate performance impact.
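
As an illustration of the regression example above, the following Python sketch (using statsmodels, with invented data, variable names, and location categories) estimates how building age, technical standard, and location relate to operating costs per m2. It is a sketch only; a real analysis would require a far larger sample and careful model diagnostics.

import pandas as pd
import statsmodels.formula.api as smf

# Invented example data for eight buildings.
buildings = pd.DataFrame({
    "operating_costs_m2": [38.2, 45.1, 29.7, 52.3, 41.0, 35.6, 48.9, 31.4],
    "building_age":       [12, 35, 5, 48, 22, 15, 40, 8],
    "tech_standard":      [3, 2, 4, 1, 3, 4, 2, 4],  # ordinal score, 1 = low, 4 = high
    "location":           ["urban", "urban", "suburban", "urban",
                           "suburban", "rural", "urban", "rural"],
})

# Ordinary least squares with location treated as a categorical predictor.
model = smf.ols(
    "operating_costs_m2 ~ building_age + tech_standard + C(location)",
    data=buildings,
).fit()
print(model.summary())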

Possible Applications ^ top 
  • Evidence-based recommendations for sustainability, energy, or real estate strategies.

  • Assessment of the effectiveness of interventions in facility management and operations.

  • Theoretical advancements in sustainable building management.

  • Reliable decision support for policymakers, practitioners, and institutions.


3.5 Evaluative Research ^ top 

Systematically assesses the effectiveness, impact, or quality of measures, programmes, or processes based on pre-defined criteria.

Evaluative research examines whether and to what extent particular interventions or programmes achieve their intended objectives. It is typically applied in practical contexts such as project monitoring, pilot evaluations, or policy reviews. Its core is assessment - not merely description or explanation - based on a systematic evaluation design that employs qualitative, quantitative, or mixed methods.

Goal: Assessment of effectiveness, efficiency, impact or quality
Theory relation: Goal-oriented, often theory-driven (e.g. logic models)
Methods: Mixed methods are common; context-dependent
Data analysis: Depending on objectives - qualitative, quantitative, or integrated
Typical methods: Impact analysis, surveys, goal attainment checks, multi-criteria decision making
Challenges: Goal conflicts, ambiguity of criteria, attribution of effects

Foundational Aspects & Characteristics ^ top 

  • Goal: Evaluative research aims to assess, rather than merely observe, interventions or initiatives. It provides decision-makers in policy, administration, and industry with reliable information regarding the usefulness or necessary adjustments of a programme.

  • Theory relation: Evaluations often rely on an impact logic model (e.g. input - output - outcome - impact) that articulates the assumed causal mechanisms behind a measure. This model serves as the frame of reference for assessment (a simple representation is sketched after this list).

  • Methods: Depending on the research question, qualitative, quantitative, or mixed methods are applied. Mixed-methods designs are common as they enable the analysis of both outcomes and context-related acceptance factors.

  • Research interest: The focus is not theory development but the practical question: "Does this intervention work - for whom, in which context, and at what cost?" Evaluation criteria may include effectiveness, efficiency, sustainability, or acceptance.

  • Typical methods: Stakeholder interviews, target achievement analyses, before-and-after comparisons, cost-benefit analyses, or standardised feedback systems.
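
As a simple representation of such an impact logic model, the following Python sketch arranges invented content for an energy-saving programme along the input - output - outcome - impact chain. The class name, field names, and entries are purely illustrative and would be replaced by the programme's actual logic model.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)    # resources invested
    outputs: list = field(default_factory=list)   # activities and deliverables
    outcomes: list = field(default_factory=list)  # short- to medium-term effects
    impacts: list = field(default_factory=list)   # long-term, societal effects

energy_programme = LogicModel(
    inputs=["budget for monitoring hardware", "training hours for facility staff"],
    outputs=["sensors installed in 20 buildings", "four staff workshops delivered"],
    outcomes=["reduced heating demand against a defined target", "changed user routines"],
    impacts=["lower CO2 emissions of the portfolio over several years"],
)

for stage in ("inputs", "outputs", "outcomes", "impacts"):
    print(stage, "->", getattr(energy_programme, stage))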

Challenges ^ top 

  • Goal conflicts: Different stakeholders may hold conflicting expectations. Evaluations must manage competing perspectives and communicate them transparently.

  • Unclear evaluation criteria: What exactly constitutes "success" is not always evident. Criteria must be jointly defined, plausibly operationalised, and documented transparently.

  • Attribution of effects: In complex environments, it is often difficult to clearly attribute observed effects to a specific intervention.

  • Role ambiguity: Evaluators frequently operate at the intersection of research, consultancy, and oversight. This requires critical reflection on one's own role and ethical clarity.

Examples & Fields of Application ^ top 

  • Evaluation of an energy-saving programme in schools: Were energy savings achieved? What educational outcomes were observed?

  • Assessment of the effectiveness of ESG training programmes for facility managers in large corporations.

  • Analysis of the success of a sustainable housing concept in municipal building projects (e.g. passive house standard, mobility connections, social inclusion).

  • Evaluation of user participation in office planning: Was the participatory process perceived as effective and meaningful?

  • Assessment of a CAFM system’s efficiency in technical building management in terms of time savings and data quality.

Sample Methods ^ top 

  • Before-after comparison: Evaluation of energy metrics and user feedback before and after the implementation of an energy monitoring system (a minimal sketch follows this list).

  • Goal attainment analysis: Assessment of whether sustainability goals (e.g. CO2 reduction, accessibility) in a construction project were met.

  • Multi-criteria analysis: Structured evaluation of different site options for a green building project, using weighted indicators such as accessibility, cost, and environmental impact (a weighted-scoring sketch also follows this list).

  • Interviews and focus groups: Qualitative assessment of perceived impacts by users, stakeholders, or project managers.
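
The before-after comparison listed above can be illustrated with a small Python sketch: the same buildings are measured before and after an energy monitoring system is introduced, so a paired test is appropriate. All figures are invented and serve only as an example of the logic, not as a recommended design.

from scipy import stats

# Invented example data: monthly electricity consumption in kWh per m2 for six buildings.
before = [10.4, 12.1, 9.8, 11.5, 13.0, 10.9]
after  = [9.7, 11.2, 9.9, 10.6, 12.1, 10.1]

# Paired t-test, because each building is measured twice.
t_stat, p_value = stats.ttest_rel(before, after)
mean_saving = sum(b - a for b, a in zip(before, after)) / len(before)
print(f"mean saving = {mean_saving:.2f} kWh/m2, t = {t_stat:.2f}, p = {p_value:.4f}")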
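
The multi-criteria analysis can likewise be sketched in a few lines of Python: each site option receives a score per criterion, the criteria are weighted, and the weighted sums are compared. The scores, weights, and site names below are invented; in practice they would be agreed with the relevant stakeholders.

import pandas as pd

# Invented scores from 1 (poor) to 5 (very good) for three site options.
scores = pd.DataFrame(
    {
        "accessibility": [4, 3, 5],
        "cost": [2, 4, 3],
        "environmental_impact": [5, 3, 4],
    },
    index=["Site A", "Site B", "Site C"],
)

# Weights agreed in the evaluation design; they sum to 1.
weights = pd.Series({"accessibility": 0.40, "cost": 0.35, "environmental_impact": 0.25})

# Weighted sum per site; the ranking depends directly on the chosen weights.
totals = (scores * weights).sum(axis=1).sort_values(ascending=False)
print(totals)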

Potential Applications ^ top 

  • Strategic decisions on continuation, modification, or termination of measures or programmes.

  • Accountability towards funders, public authorities, or the wider public.

  • Development of good practice models in facility management or sustainable building operation.

  • Quality enhancement and optimisation through formative evaluation during implementation.

Reflection Task / Activity ^ top 

Find three academic studies in a field of your choice.

Assign each study to one of the four research types discussed in this chapter - and justify your categorisation in one to two sentences.

4. Bachelor's Thesis, Master's Thesis & Doctoral Dissertation Compared ^ top 

Student academic papers at universities differ not only in scope but also in their objectives and research ambitions. What they have in common is that they are based on academic methods and sources, yet with each level the requirements for theoretical depth, research competence, and contribution to academic discourse increase. The following overview contrasts key characteristics:

Objective
  • Bachelor's thesis: Summarise, reflect on, and apply existing knowledge; first steps in independent analysis.
  • Master's thesis: Conduct independent research aiming to contribute new findings to the field.
  • Doctoral dissertation: Systematic, original research with a significant contribution to scientific knowledge.

Research gap
  • Bachelor's thesis: Not strictly required, but may be addressed.
  • Master's thesis: Essential - clear distinction from existing research.
  • Doctoral dissertation: Central - independent identification and addressing of a research gap.

Methodological competence
  • Bachelor's thesis: Use of established methods, usually basic empirical approaches.
  • Master's thesis: Mixed methods, advanced empirical or conceptual methods.
  • Doctoral dissertation: Complex methodological systems, often involving theory-led reflection.

Use of literature
  • Bachelor's thesis: Systematic review of fundamental literature; introduction to scientific reasoning.
  • Master's thesis: Critical and in-depth engagement with specialist and international literature.
  • Doctoral dissertation: Comprehensive, systematic and often interdisciplinary literature review based on current research.

Academic standard
  • Bachelor's thesis: Scientifically sound but primarily practice-oriented.
  • Master's thesis: Theory-based, well-argued, academically advanced.
  • Doctoral dissertation: Research-based, innovative, theoretically and epistemologically contributive.

Evidence & argumentation
  • Bachelor's thesis: Required - coherent and properly referenced arguments based on literature and data.
  • Master's thesis: Required - precise citations and methodologically validated references.
  • Doctoral dissertation: In-depth source work including archival or primary sources; high standards of traceability and validity.

Permissible formats
  • Bachelor's thesis: No mere summaries.
  • Master's thesis: No purely practical formats.
  • Doctoral dissertation: Independent scientific contribution.

Publishing requirement
  • Bachelor's thesis: Not required.
  • Master's thesis: Possible, except with embargo.
  • Doctoral dissertation: Usually publicly available, sometimes mandatory.

Independence
  • Bachelor's thesis: First independent research project with close supervision.
  • Master's thesis: Self-planned and conducted, with supervision as needed.
  • Doctoral dissertation: Long-term research project with limited supervision.

5. Phases of an Academic Thesis ^ top 

The path towards completing an academic thesis is rarely linear. It requires planning, methodological reflection, critical engagement with existing knowledge, and, not least, the ability to systematically structure and clearly present one's own ideas in writing. Regardless of the specific disciplinary context, the process can be divided into three interdependent and partly overlapping phases: conceptual preparation, research engagement, and editorial finalisation.

Throughout all three phases, it is essential to continuously record information, literature, methodological considerations, and personal reflections. A personal knowledge repository - for example, a research journal, a structured notebook, or a digital documentation system - not only helps trace one’s own thinking but also enables the identification and integration of links between theory, method, and findings as the work progresses.

Three phases or milestones of academic writing

5.1 Topic Selection & Concept Development ^ top 

The process begins with the selection of a thematic focus, often motivated by personal interests, societal relevance, or research gaps identified in the academic literature. From a broad subject area, an initial problem statement should be developed to specify the research interest and determine a clear direction. Even in this early phase, literature review plays a key role. By surveying relevant sources, one can assess whether the topic is viable, identify existing theoretical and empirical approaches, and pinpoint unresolved questions or controversies. Engaging with existing studies helps situate the project within the research context, guides the choice of methodology, and sharpens one’s perspective. The goal of this phase is to develop a sound research concept, usually in the form of a proposal (exposé), which includes the following elements:

  • a preliminary yet clearly defined research question;

  • defined objectives as well as explicit non-objectives to maintain focus;

  • considerations regarding data availability, methodological approach, and potential constraints;

  • a realistic timeline and work plan, including buffer periods for unexpected delays.

5.2 Research Engagement & Execution ^ top 

Once the research question has been formulated, the actual research phase begins. This second phase involves deeper engagement with the academic literature and the planning and execution of one's own empirical investigations or analyses. The aim is to find systematic answers to the research question, drawing upon theoretical concepts, methodological expertise, and empirical data. This phase includes defining key terms, constructing theoretical frameworks, and reviewing the state of the art. These steps not only position the project within the academic discourse but also serve to justify arguments and methodological choices. One's own research - whether qualitative, quantitative, or mixed-methods - depends on the coherent and rigorous application of methods. This includes selecting appropriate tools for data collection and analysis, reflecting on research ethics, carefully planning data collection, and addressing uncertainty, bias, or disruptions. This phase is often iterative: new insights from the literature or from data analysis may require adjustments to theoretical assumptions or refinements of the methodological approach. A constant alignment between theory, empirical data, and reflection is therefore crucial.

At the end of this phase, you should have:

  • a structured literature overview with systematically documented sources (ideally supported by a reference management tool);

  • your research findings, analysed, processed, and interpreted;

  • and substantial draft text for the chapters on background and problem statement, theory, methodology, and findings.

5.3 Writing & Final Editing ^ top 

The third phase focuses on transforming research results and theoretical foundations into a coherent and formally correct academic text. This requires not only stylistic precision but also analytical clarity and editorial discipline.

This phase includes writing and revising all chapters - from the introduction and theoretical and methodological foundations to the presentation of results, discussion, and conclusion. Text coherence, logical argumentation, and internal consistency are key quality criteria. Earlier notes, quotations, and drafts serve as the basis for the final version.

Final editing involves linguistic and formal refinement. This includes:

  • proofreading for spelling, grammar, and style;

  • verifying all citations and references, including adherence to the required citation style;

  • consistent layout of text, figures and tables;

  • and where applicable, the preparation of an abstract, appendices, a declaration of originality, or a summary in another language.


6. Academic Writing = Career Advantage?! ^ top 

Academic writing is often perceived as a formal academic requirement whose relevance to professional practice is not immediately obvious. However, this perception changes when academic writing is understood not merely as a means of generating knowledge, but as a structured form of thinking and acting that remains highly applicable beyond the university context. The competencies developed through academic writing correspond closely to key demands of the workplace - especially in knowledge-intensive, planning-oriented, or analytical professions.

Academic Writing vs. Career Advantage

The following list provides insight into cross-disciplinary skills that extend far beyond the requirements of a specific thesis. While not exhaustive, it illustrates how academic writing can serve as a foundation for structured, reflective, and professional work - whether in complex project management, concept development, market analysis, or the facilitation of change processes.

  • Decision-making under uncertainty: Academic writing involves selecting from various topics, methods, or sources despite limited information. This ability to make justified decisions and reduce uncertainty systematically is central to project-based roles in consulting, management, or development.

  • Thematic depth and perseverance: Developing a research question requires sustained engagement with a topic over weeks or months - even in phases of uncertainty, repetition, or frustration. Such perseverance and self-motivation are essential for managing long-term professional projects.

  • Information literacy and research skills: Targeted research using academic databases, bibliographies, and peer-reviewed journals fosters structured information management. It also trains the ability to distinguish relevant from irrelevant sources - a core competency in today's data-driven work environment.

  • Problem awareness and focus: Formulating a clear research question requires precise problem definition, deliberate delimitation of the topic, and critical engagement with the relevant context. This capacity for abstraction and focus is fundamental to all professional analysis.

  • Analytical text processing: The ability to quickly comprehend extensive texts, identify key points, and critically assess them is vital - not only for literature reviews but also when dealing with policies, reports, studies, or contracts in professional settings.

  • Condensation and audience-appropriate communication: Academic writing demands that complex content be articulated precisely - whether in summarising study results or citing extensive sources accurately. This skill enhances professional communication - for example, in reports, presentations, or decision-making briefs.

  • Precise language and clarity of argumentation: Academic writing avoids embellishment, personal bias, or redundant phrases. It aims for clarity, traceability, and logical consistency. This ability to argue objectively is crucial in professional communication, interdisciplinary teams, and legally sensitive documentation.

  • Dealing with multiple perspectives and critical evaluation: Academic work exposes students to diverse theories, viewpoints, and lines of reasoning. The ability to compare and assess divergent approaches encourages reflective thinking and guards against premature conclusions - also in handling complex workplace situations.

  • Source criticism and transparency: Proper and consistent referencing enhances transparency and supports the credibility of arguments. In professional contexts, grounding statements and decisions in verifiable data or frameworks is equally important - e.g. in audits, compliance processes, or public communications.

  • Self-reflection and professional handling of criticism: Academic work requires engaging with feedback, questioning one's own position, and implementing improvements systematically. Constructive engagement with critique thus becomes a valuable resource for professional growth.

  • Time management and project coordination: Planning and executing a thesis trains thinking in phases, milestones, and deadlines - as well as the coordination of diverse tasks (research, analysis, writing, proofreading, etc.). This project logic is transferable to many professional roles involving planning or control responsibilities.

  • Compliance and dealing with formal requirements: Adhering to academic standards, citation rules, or formal criteria fosters an awareness of regulations - a skill of great importance in professional contexts involving norms, quality standards, or legal requirements.

Reflection Task / Activity ^ top 
Reflect on how the competences mentioned above have changed your understanding of academic work and in which professional situations you have already applied these skills or could make use of them in the future.

 

 

Unless stated otherwise, the contents of Principles of Academic Research published on 27 July 2025 are © Christian Huber and licensed under the Creative Commons Attribution 4.0 International licence (CC BY 4.0). Reuse requires appropriate credit, a link to the licence, and an indication of any changes; you must not imply endorsement.
Assisted by AI
Generative pre-trained transformers (large language models) were used for proofreading and translation. Content was reviewed before publication; Christian Huber is responsible for accuracy and interpretation.
 
For publication details please see the Imprint.
 
For information on how personal data is processed please see the Privacy Policy.