1. Introduction

The way industry develops software has changed fundamentally over the past decade. A large part of that change has been captured by the term DevOps. Kim et al. [1] describe it as a set of principles that combine automation, continuous integration, continuous delivery and tight collaboration between development and operations of software products. That this is not merely a rhetorical label has been shown by the DORA (DevOps Research and Assessment) research: organisations that have actually adopted these practices deploy to production roughly two hundred times more often than the rest, and the time from a change being approved to it being deployed is about a hundred times shorter [2].

University teaching has yet to catch up with this reality. López-Fernández et al. [3] showed in a systematic mapping study that the centre of gravity of software engineering courses still lies in the theoretical layer; other important areas - operations, build and deployment automation, continuous integration and systematic code review - tend to be covered only superficially, if at all. Bobrov et al. [4] add a symptom that practitioners know all too well: teachers simply have no tool that would let them observe, over the course of a semester, whether students are actually learning to follow DevOps practices.

The topic I address falls squarely into this gap. Specifically, it concerns two courses taught at the Department of Computers and Informatics of the Technical University of Košice: Software Engineering Fundamentals and DevOps Fundamentals. In both, the teacher assigns a team project on GitLab; the team receives clear instructions on what the workflow should look like, but during the semester there is essentially no mechanism to verify that they actually follow them. The symptoms vary: one team does not bother with issues, another has them but links them to branches incorrectly, a third pushes changes to the main branch without tests, a fourth lets the continuous integration pipeline stay red for a whole week. The teacher cannot realistically catch all of this for every team, even though exactly these aspects form the core of what a merge request and a code review are meant to ensure.

At present the teacher is forced to check these aspects manually in the GitLab interface for each team and each student individually. In the context of the SEF and DevOps Fundamentals courses, which together cover 17 study groups and more than 500 enrolled students, this is practically infeasible. As a result the teacher cannot provide timely, targeted feedback, which weakens the effectiveness of teaching. Yet students only properly learn DevOps practices when they apply them repeatedly and receive constructive feedback on the quality of their work [5]. I have observed this problem first-hand: during my bachelor studies I took the Software Engineering Fundamentals course and personally experienced feedback on adherence to DevOps practices arriving with significant delay.

The seriousness of the problem is underlined by the gap between what the software industry expects from graduates and what the current curricular model can deliver. Garousi et al. [6], in a systematic mapping and thematic synthesis of 33 studies, showed that the average gap between graduates' knowledge and the expected level of competency in the integration and operational aspects of software development is among the largest in the entire engineering curriculum. Pang, Hindle and Barbosa [7], in a qualitative grounded-theory study, concluded that the main obstacle to teaching DevOps is not a lack of theoretical content but the absence of tools that would allow teachers to directly observe team behaviour during the semester. The solution proposed in this thesis fits exactly into this gap.

1.1 Motivation

There are several reasons why this problem is worth tackling. The text below describes the specific shortcomings I observed in both courses, together with the benefits that addressing them would bring.

The first shortcoming is the disproportionate workload on the teacher. To find out how a particular team works in GitLab the teacher must go through commit history, open and merged merge requests, review comments, pipeline status and several other artefacts. At the scale at which the courses are run today (SEF: 9 lab groups and 125 students; DevOps Fundamentals: 8 project groups and 382 students), this kind of inspection is no longer realistic. In practice this means most students receive feedback with significant delay, or do not receive it at all.

The second serious shortcoming is the lack of immediate feedback for students. Research in computing education has shown that effective learning of programming and engineering practices requires fast, concrete feedback [8]. Hattie and Timperley [9], synthesising several earlier meta-analyses, argue that feedback focused on a specific task and the way of solving it is among the strongest factors influencing learning outcomes, while its effect drops rapidly with increasing time delay from the activity in question. The delay typical of manual checks of DevOps practices is therefore not merely an organisational problem but directly weakens the pedagogical effect of the whole course.

The third shortcoming is the impossibility of objective and comparable assessment. Without standardised metrics, the assessment of compliance with DevOps practices is necessarily subjective. Different teachers may grade the same team differently, threatening the fairness of the assessment. Leite et al. [10] argue that measurability and observability are key pillars of DevOps culture; consequently, the assessment of DevOps adherence should also rest on measurable metrics.

The fourth shortcoming is the risk of cheating and surface-level compliance. Without automated detection it is hard for the teacher to identify situations in which students formally fulfil the requirements without truly understanding the practices - for example by creating trivial merge requests, writing generic review comments, or concentrating all activity into the last hours before the deadline. It is well established that concentrating commits into a narrow window before a deadline correlates with increased code defect rates [11], which holds for industrial practice as well as for student teams.
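The deadline-rush pattern mentioned above can be quantified from commit timestamps alone. The sketch below is a hypothetical heuristic of my own, not part of the implemented rule set; the 24-hour window, the 60 % interpretation and the function name are illustrative choices.

```python
from datetime import datetime, timedelta

def deadline_rush_ratio(commit_times: list[datetime], deadline: datetime,
                        window: timedelta = timedelta(hours=24)) -> float:
    """Fraction of a team's commits made within `window` before the deadline.

    A value close to 1.0 suggests the team concentrated its work into the
    final hours; a value close to 0.0 suggests steady work over the term.
    """
    if not commit_times:
        return 0.0
    rushed = sum(1 for t in commit_times
                 if deadline - window <= t <= deadline)
    return rushed / len(commit_times)

# Illustrative data: one early commit, two commits on the deadline day.
deadline = datetime(2024, 5, 10, 23, 59)
commits = [
    datetime(2024, 4, 20, 10, 0),   # steady work weeks earlier
    datetime(2024, 5, 10, 20, 0),   # last-day rush
    datetime(2024, 5, 10, 23, 30),  # last-day rush
]
print(deadline_rush_ratio(commits, deadline))  # 2 of 3 commits in the window
```

A teacher-facing rule could then flag any team whose ratio exceeds an agreed threshold, turning a vague suspicion of "last-minute work" into a comparable number.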

The final and, from the standpoint of teaching quality, most fundamental shortcoming is the lack of repeated, structured practice. Ericsson, Krampe and Tesch-Römer [12] showed that acquiring expert skill requires deliberate, repeated practice with immediate correction of mistakes. The current course organisation provides clear assignments and clear criteria, but without tool support the cycle "I try -> I get feedback -> I fix it" disintegrates into long stretches in which the student works without knowing whether they are heading in the right direction.

The expected contribution of solving the stated problem is the design and implementation of a set of supporting tools, including a web-based platform, that will automatically monitor the activity of student teams in GitLab via webhooks [13], evaluate compliance of their workflow with defined DevOps rules, provide the teacher with a clear dashboard, and generate structured assessment reports.
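To make the webhook-based monitoring concrete, the following sketch evaluates a single GitLab push-event payload against two example rules. The payload fields read here (object_kind, ref, commits) follow the shape documented for GitLab webhooks; the specific rules, the protected-branch name and the issue-reference convention are illustrative assumptions, not the final rule set of the platform.

```python
PROTECTED_BRANCH = "refs/heads/main"

def check_push_event(payload: dict) -> list[str]:
    """Return human-readable rule violations found in one push-event payload."""
    violations = []
    if payload.get("object_kind") != "push":
        return violations  # not a push event; other handlers would apply

    # Example rule 1: changes must not land on the main branch directly.
    if payload.get("ref") == PROTECTED_BRANCH:
        violations.append("direct push to main branch")

    # Example rule 2 (assumed convention): every commit message should
    # reference an issue, e.g. "Fix login form #42".
    for commit in payload.get("commits", []):
        if "#" not in commit.get("message", ""):
            violations.append(
                f"commit {commit.get('id', '?')[:8]} does not reference an issue"
            )
    return violations

# Example payload containing only the fields the check reads.
event = {
    "object_kind": "push",
    "ref": "refs/heads/main",
    "commits": [{"id": "a1b2c3d4e5", "message": "fix typo"}],
}
print(check_push_event(event))  # both example rules fire
```

In the envisaged platform a small web endpoint would receive such payloads from GitLab, run every enabled rule, and store the violations per team, so that the dashboard and the assessment reports are generated from the same data.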

1.2 Task formulation

The problem described above gives rise to the specific tasks I address in this thesis. First, I need a thorough understanding of how the Software Engineering Fundamentals and DevOps Fundamentals courses are organised today, what they cover and where manual assessment has its largest blind spots. On these findings I build the design and implementation of tools that integrate with the GitLab platform and take over part of the routine checks of DevOps practices, focusing on those checks that are hardest for the teacher to sustain. The finished solution is not meant to remain a prototype, however: within this thesis I deploy it into operation and collect feedback from teachers and students live during the semester. Their comments and my own observations feed into a further round of improvements; finally, in the conclusion I evaluate the impact of the deployment on the organisation of teaching and on the quality of feedback.


  1. Kim, G., Humble, J., Debois, P., Willis, J., Forsgren, N. The DevOps Handbook (2nd ed.), IT Revolution Press, 2021. 

  2. Forsgren, N., Humble, J., Kim, G. Accelerate: The Science of Lean Software and DevOps, IT Revolution Press, 2018. 

  3. López-Fernández, D. et al. Teaching DevOps: A Systematic Mapping Study, IEEE Access, 2021. 

  4. Bobrov, E. et al. Teaching DevOps in Academia and Industry: Reflections and Vision, DEVOPS 2019. 

  5. Christensen, H. B. Systematic Testing Should not be a Topic in the Computer Science Curriculum!, ITiCSE 2003. 

  6. Garousi, V. et al. Software-engineering education: A systematic mapping study and a thematic synthesis, IST, 2019. 

  7. Pang, C., Hindle, A., Barbosa, D. Understanding DevOps Education with Grounded Theory, ICSE-SEET 2020. 

  8. Krusche, S., Bruegge, B. Continuous Software Engineering Education, ICSE-SEET 2018. 

  9. Hattie, J., Timperley, H. The Power of Feedback, Review of Educational Research, 2007. 

  10. Leite, L. et al. A Survey of DevOps Concepts and Challenges, ACM Computing Surveys, 2019. 

  11. Eyolfson, J., Tan, L., Lam, P. Do time of day and developer experience affect commit bugginess?, MSR 2011 / EMSE 2014. 

  12. Ericsson, K. A., Krampe, R. T., Tesch-Römer, C. The Role of Deliberate Practice in the Acquisition of Expert Performance, Psychological Review, 1993. 

  13. GitLab Docs: Webhooks. https://docs.gitlab.com/ee/user/project/integrations/webhooks.html