Problems and solutions for using computer (networks) for education

Hermann Maurer (Graz University of Technology, Graz, Austria)

Journal of Research in Innovative Teaching & Learning

ISSN: 2397-7604

Article publication date: 3 April 2017

Abstract

Purpose

The idea to use computers for teaching and learning is over 50 years old. Numerous attempts to use computers for knowledge dissemination under a variety of names have failed in many cases, and have become successful in others. The essence of this paper can be summarized in two sentences. First, in some niches, applications tend to be successful. Second, attempts to fully eliminate humans from the educational process are bound to fail, yet if a large number of aspects are handled well, the role of teachers can indeed be much reduced. The paper aims to discuss these issues.

Design/methodology/approach

Report on experimental results.

Findings

In some niches, applications of e-Learning technology tend to be successful. However, attempts to fully eliminate humans from the educational process are bound to fail, yet if a large number of aspects is handled well, the role of teachers can indeed be much reduced.

Research limitations/implications

A number of features that seemed essential in earlier e-Learning systems turn out to be superfluous.

Practical implications

New e-Learning systems have to concentrate on quality of content, not complex technology.

Social implications

E-Learning done the right way helps learners, teachers and institutions.

Originality/value

The experiments reported either verify or refute opinions that are often stated loosely.

Citation

Maurer, H. (2017), "Problems and solutions for using computer (networks) for education", Journal of Research in Innovative Teaching & Learning, Vol. 10 No. 1, pp. 63-78. https://doi.org/10.1108/JRIT-08-2016-0002

Publisher: Emerald Publishing Limited

Copyright © 2017, Hermann Maurer

License

Published in the Journal of Research in Innovative Teaching & Learning. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

I will try to discuss the major issues involved in knowledge dissemination using computers (often called e-Learning) following my own experiences. This certainly introduces some strong bias. However, I do not try to avoid this since I have followed and experienced most developments over the last 50 years myself. Examples and anecdotes will prove some of the points, and will hopefully be both entertaining and useful.

Let me state clearly that this is not a research paper, but a survey of developments, many of them ending in a dead end. I believe that this is what also makes this paper of interest for further research and innovation, by showing that certain things do not work, or work only if special attempts are made. To make it evident what lessons I have learnt and what one should never forget, I will put such “aspects learnt” in boldface.

There is no doubt that Programmed Logic for Automated Teaching Operations (PLATO) was the first e-Learning system using computers, and quite impressive for its time, as it ran on an early timesharing system more than 50 years ago. It originated in the 1960s at the Urbana campus of the University of Illinois, where Professor Don Bitzer became interested in using computers for teaching. With some colleagues he founded the Computer-based Education Research Laboratory. Bitzer collaborated with a few engineers to design the PLATO hardware. To write the software, he collected a staff of creative eccentrics ranging from university professors to high school students, few of whom had any computer background. Together they built a system that was in many ways at least a decade ahead of its time.

It is almost impossible to define exactly what PLATO was and how it developed, since it went through an endless process of improvements over the years. The paper by Bitzer (1986) is an attempt to recapture some developments that I will not discuss in detail. But it is worth noting how powerful the system was, and how many of the good ideas in it around 1970 were only rediscovered decades later! Basically, PLATO around 1970 was still based on black-and-white terminals able to display mainly written text. Terminals were connected to a central computer presenting information to users whose identity was known and whose behaviour (every keystroke) was carefully monitored. Thus, it was possible to find out where the material presented was too difficult for students at a certain stage in their studies, and how long they needed to go through a section. Further, multiple-choice questions made it possible to gauge the level of understanding. This allowed learning material created with the authoring system called TUTOR to be adjusted. Thus, it was recognized at this early stage that unobtrusively obtained student feedback is important to make it possible to modify material for individuals or groups of individuals, and that continually testing the level of understanding, providing alternatives and providing encouraging feedback are crucial issues.

It is truly amazing to see that important insights such as “student feedback is essential” have been ignored in many systems developed later; even today, feedback is often not collected unobtrusively enough (i.e. without bothering students), and what is collected is often evaluated far too little.

In the years up to about 2000 (when the internet started to be widely available), three main streams are observable in North America:

  1. PLATO’s development continued and its deployment spread, including terminals with graphics and, particularly important, messaging between users online at the same time, or at least asynchronous communication between students and instructors.

  2. Systems to create sophisticated and animated graphics as part of the “courseware” were developed. One of the many such products was “Macromedia Director”, a product for developing stand-alone (typically CD-based) learning material. However, note that all those many stand-alone systems did not allow specific feedback to be collected, except by using tedious questionnaires!

  3. The emergence of hypertext/hypermedia systems used in timesharing systems, typically in university settings, provided the possibility of working highly interactively with linked material, including text, animations, graphics, even sound and video. All this was based on Ted Nelson’s vision of a model for creating and using linked content that he called “hypertext” and “hypermedia”. Ted Nelson began implementation of a hypertext system named Project Xanadu in about 1960; however, his first and incomplete public release was finished much later, in 1998. He later worked with Andries van Dam to develop the Hypertext Editing System in 1967 at Brown University. In August 1987, Apple Computer released HyperCard, a powerful alternative to Macromedia Director, yet again with no networking facilities to speak of. Yet its impact, combined with interest in Brown University’s Intermedia and similar undertakings that seemed to provide new and better platforms for e-Learning, led to broad interest in and enthusiasm for hypertext and new media. Still, all this did not have a serious impact except as an instrument for arguing for more funding. This started to change a bit with the advent of the internet, particularly the web.

The development in Europe from the late seventies to the turn of the century was somewhat different. In 1969, the British engineer Sam Fedida proposed to equip TVs (everyone had one by that time, most with a remote control pad that could be used as a simple input device) with a bit of additional electronics, called a “decoder”. This would be connected via phone lines to a network of servers to allow users to retrieve information, to order anything offered and to write simple messages (akin to today’s SMS). The first nationwide systems on that basis, officially called videotex (not to be confused with videotext!), were introduced in the seventies, first in Great Britain and then successively in most European countries. In Austria, my home country, I was responsible for recommending whether to take the same approach. I recommended an alternative: to use the same basic idea, but add enough electronics in the decoder to turn it into a small programmable colour-graphics computer and equip it with a full keyboard. Thus the MUPID was born, a networkable, programmable colour-graphics device, with all programs and data stored in the network of videotex servers (today one would say “in the cloud”). Because the network of servers was run by the national telecom authorities, messaging without spam was easy, senders of messages and information providers could be identified by the users, and micropayments were possible, since the amounts were just added to the monthly phone bill.

Thus, in addition to stand-alone “e-Learning computer labs” (known at this point in time under labels like computer-assisted learning or instruction, computer-based teaching or training, and others), videotex and MUPID offered networked variants. This is documented in Videotex (1982), Longley and Shain (1983), Maurer and Sebestyen (1984) and particularly in Maurer (1985, 1986). It allowed the use of colour and of different types of animation. Figure 1 shows four screen dumps of information downloaded with MUPID from the nationwide network. The top row shows that even full-fledged dictionaries were already available at the time, and small page charges (1P in the example) could be collected after warning the user (a feature still missing in today’s internet!). The second row shows two game applications: it was possible to play chess synchronously and asynchronously with arbitrarily many persons, while at the same time engaged in a (written) chat with other players; and games like exploring parts of the universe, including (limited) animation, were also possible. Indeed, one of the gaming applications proved popular to the extent that the system collapsed under its load during Christmas 1984!

However, only 50,000 MUPIDs were produced in Austria and Germany. The typical IBM PC was becoming more widely accepted, so software and protocols had to be adapted to PCs and to whatever networks were available, reducing the possibility of using full colour and powerful communication facilities, including central supervision of learners, feedback between learners and courseware supervisors, and communication between learners. It is possible to argue that because of this, first attempts at truly networked e-Learning with all kinds of communication facilities were delayed by almost 20 years, until the internet started to become accepted and inexpensive enough to allow its use on a large scale.

Nevertheless, one rather unique commercial e-Learning undertaking was started around 1986 as joint work with CDC (using an authoring tool similar to PLATO’s TUTOR, mainly developed by the late John Garrat) called Computer Supported Teaching of Computer Science (COSTOC), making use of colour and animation as mentioned above. At some stage over 300 one-hour lectures were available and were used in a number of labs in Europe and two in the USA: one at the University of Texas at Dallas under the directorship of Fillia Makedon, and one at the University of Denver under the directorship of the late Peter Warren. Figure 2 shows a few samples. In the first row you see the cover of a brochure on the system and next to it a multiple-choice question: it is remarkable that even at this stage the system was capable of handling spelling errors, so the wrong spelling “Autria” of Austria was recognized! The bottom two pictures come from a university course on sorting algorithms by Hofbauer and Maurer (1988). Particularly the left picture, showing the sifting down of a value as part of heapsort, makes it clear that animation was indeed a powerful tool.

For more literature see Makedon and Maurer (1987a, b), Maurer (1987, 1988), Koegel and Maurer (1987), Makedon et al. (1987), Huber et al. (1989). This very partial list shows the immense interest in what was then called “presentation type CAI” with colour graphics, animation, and some feedback and testing facilities.

However, with the exception of Austria and Germany, the COSTOC lessons could not be downloaded from a nationwide network, but at best from some university network, reducing the important feedback to courseware developers. Communication between students and tutors usually required an extra component tailored to the local circumstances and interrupting the learning stream. Hence, most efforts in e-Learning between 1985 and well past 2000 were based on stand-alone or only very locally networked groups of PCs or workstations. That is, they were limited to local e-Learning labs, or even just to e-Learning on an isolated machine with material available via some external storage device, ignoring the lessons already learnt with PLATO. We will discuss this period in the next section.

2. The time of e-Learning labs, and what can be learnt from them (1985-2005)

Throughout the time we are discussing now, the number of PLATO-type learning environments kept growing for some time[1], with significant improvements particularly on the display front. Indeed, it is almost funny to observe that every new breakthrough on the display end caused a wave of hype: “Now we will finally see e-Learning replace teachers!” This happened with the first NeXT Computer and the NeXTstation in 1990: they provided, for the first time, black-and-white movies and audio sequences that could nicely be incorporated into presentations without extra gadgets (like Philips’ videodiscs). Yet the 50,000 units of NeXT machines, about the same as the number of MUPIDs produced, are often seen as the ONE big flop of Steve Jobs. Much better picture compression using JPG (first approved by international bodies in 1992, and fully accepted in 1998) and video compression using MPG, with the first workable solutions starting in 1994, was again hailed as revolutionizing teaching. The resolution of display devices got better and better.

However, all the many computer labs installed for e-Learning, even the best, did OK, but not really well. We will discuss this using the COSTOC system mentioned earlier, which the author was deeply involved in, since its fate and its woes are typical of what happened then and is still happening today.

All e-Learning material in the period discussed was basically run either on stand-alone computers (the software had to be downloaded via some net, or installed from some external storage device) or in special labs with typically 30-60 workplaces connected to a central server. The material was of presentation type, i.e. it consisted of a number of “frames”, each frame typically containing some textual, pictorial and diagrammatic material, sometimes including animations (in the better systems with the speed of animation controlled by the user), and including video or audio clips. The frames could also contain some navigational features. Typically, multiple-choice questions or even questions allowing textual answers could make sure that users understood the material. If necessary, this made it possible to introduce some background material or, conversely, to omit material already understood. Usually some simple feedback or access to FAQs would be possible.
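To make the structure of such presentation-type material concrete, here is a minimal illustrative sketch in Python of how frames with branching questions could be modelled; all names are hypothetical and this is not the actual COSTOC format:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Question:
        # A multiple-choice check attached to a frame.
        prompt: str
        choices: List[str]
        correct_index: int
        remedial_frame: Optional[str] = None   # frame shown after a wrong answer

    @dataclass
    class Frame:
        # One "frame" of presentation-type courseware: content plus navigation.
        frame_id: str
        text: str
        media: List[str] = field(default_factory=list)        # pictures, animations, clips
        questions: List[Question] = field(default_factory=list)
        next_frame: Optional[str] = None                       # default navigation target

    # Example frame with one comprehension check.
    intro = Frame("heap-1", "Heapsort sifts the new value down the heap step by step.",
                  questions=[Question("Is heapsort a stable sort?", ["yes", "no"], 1)])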

With the exception of video and audio clips, the COSTOC system allowed for all of the above, including (uniquely for the time at issue) textual analysis. Since the material presented next could differ based on the outcome of questions, notions such as the intelligent tutoring system (ITS) and goal-oriented learning (structuring material in stages leading to a goal specified beforehand) were introduced. The fact that such e-Learning systems were only partially successful can be traced to four basic issues. First, preparing well-structured, good-looking and useful material is far from trivial. Second, additional material or shortcuts were usually provided using links as introduced in hypermedia systems (and now of course omnipresent in the internet); such links can easily be confusing, leading to the “lost in hyperspace syndrome”. Third, only very limited feedback from students to courseware developers and between students and tutors was possible, often none at all. Fourth, many psychological issues have been overlooked for a long time. I am inclined to say that this is still true. I will briefly address all four aspects:

  1. Preparing well-structured, good-looking and useful material is far from trivial.

    Often, material was prepared by teachers for their own students. Note that nobody writes a nice textbook for just one group of students, so why should a teacher bother to spend an exorbitant amount of work on one class, or a class repeated a few times, unless there were a financial incentive or it were at least feasible to exchange material with other teachers? It is interesting to note that in some commercial environments financial compensation was used: this is the main reason why some large companies invested successfully in courseware and e-Learning environments. In educational settings developing courseware was rarely compensated financially or academically. Thus, apart from some enthusiasm, only the idea of at least being able to exchange courseware seemed attractive, yet incompatibilities between systems made this very hard to achieve. The late Erik Duval (www.ae-info.org/ae/Member/Duval_Erik/) fought for years to achieve an international standard (Forte et al., 1997; Duval and Forte, 2008). Eventually an agreement on a standard between Europe and the USA was achieved, but developments just seemed to ignore it. To ease the burden of preparing courseware some very sophisticated techniques were developed over time, and some interesting ideas were converted into reality. One is to “author on the fly”, i.e. to create material for later use while teaching, by recording the voice and the writing on a whiteboard. This idea was introduced by Thomas Ottmann from the University of Freiburg. However, there were other stumbling blocks: different university teachers teaching the same course are likely to still select different topics. Hence, to share material it is necessary to build it in a very modular way, each module reusable in different contexts, an impossible task if there are links criss-crossing all over the place. This leads to:

  2. Additional material or shortcuts were usually provided using links that could easily be confusing, leading to the “lost in Hyperspace syndrome”.

    Even more to the point, most researchers in e-Learning realized the need for modularity of material. This is almost impossible in the presence of a dense network of hyperlinks. It was Nick Scerbakov from Graz University of Technology who realized that, akin to the first second-generation knowledge management system Hyperwave (Maurer and Tomek, 1990; see particularly Kappe et al., 1992), one can do without links. This was refined in Maurer et al. (1995) and finalized as a book in Maurer and Scerbakov (1996): this established the HM paradigm as a crucial concept for all who dislike the dictatorship of links.

  3. Not enough feedback facilities from students to courseware developers and between students and tutors were available as part of e-Learning set-ups. Only with the advent of universal e-mail, and later social media, did extensive communication between all parties involved become possible; hence built-in communication facilities are no longer a critical issue.

  4. Many psychological issues have been overlooked, and are still overlooked.

I want to base this subsection on personal experiences. They are typical, will have been encountered in one way or another by anyone digging deep into e-Learning with e-Learning labs, and should hence be kept in mind by all who try to use e-Learning.

Before moving to Graz University of Technology at the end of 1977 I was a professor at the University of Karlsruhe, Germany. I hated having to teach incoming students the basics of computer science and programming each year: lecturing to 400+ students in the largest auditorium was an exercise in frustration. However hard I tried, beyond row 25 little attention was paid to what I said.

I was determined to change this. I used a professional video lab to produce 44 units of 45 minutes of video each, each with an additional hour of backup. Each unit started with a joke (hoping this would make sure students would come in time). After 15 minutes there would be another joke (15 minutes of concentration is all one can expect). Some task had to be done 15 minutes later with explanations of the solution (again, to break monotony), and one more joke at the end (hoping that students would stay to see the last joke).

To tell the truth: I spent more time on preparing the jokes than the material! The backup units were for those who had the feeling they had not understood everything in the main units. That is, they consisted mainly of some extra explanations and many examples. Each unit was shown about 20 times by a tutor in small rooms (for 35 students maximum), according to a widely advertised timetable. Tutors could interrupt the video if questions were asked. The class was accompanied by tutoring sessions where homework and problems were discussed. Thus, students had little personal contact with me, but ample contact with tutors almost their age.

I am not sure that my efforts really paid off for me: I had invested lots of time, but as a consequence I did not have to teach the course any more myself, a graceful move by my Dean.

The evaluations of this system by students, and their performance, were very gratifying. I had also learnt a lot. We have to live with short periods of concentration. Think about this: at some stage teachers had a blackboard they would write on. When it was full, they had to wipe it: this was not a waste of time but a welcome interruption for the students. When architects installed six movable blackboards to make it unnecessary for professors to wipe them, they were damaging the educational process!

Let me also parenthetically tell an anecdote that I hope will help to catch your attention again! When I left Karlsruhe, I was asked if they could still use my videos and the set-up I had introduced. Obviously, I had no objection. Some six years later I gave a seminar in Karlsruhe. I walked by some classrooms where my videos were shown. Then I realized that many young students looked at me in a strange way. Finally, I understood: they knew me from the videos, had never seen me in person, had considered me dead for a long time: a ghost was walking on campus!

Anyway, my positive experience with the video-instruction lab convinced me: if I did the same at my new university, but used an e-Learning lab where interactivity would be much better, it would be a hit. Well, I had to learn a few things that I believe everyone reading this should know.

Courseware development was a hassle. But then there were top professors and good friends who helped out, like Ian Witten (NZ), Arto Salomaa (Finland), Thomas Ottmann (Freiburg), Lutz Wegner (Cassel), Jürgen Albert (Würzburg), etc.

Thus, I could use this material in our beautiful e-Learning lab, with a roster, where students could reserve time slots to work on one of the 40 machines. I was dismayed to see after two weeks that both lab and roster were still mostly empty. It dawned on me that since the knowledge was available till the end of term, there was no hurry to use it right now. No need to leave an evening party early, as some might have done to catch an important ordinary lecture at 8 a.m. next day! I found a trick solution: one night I filled one-third of the roster with invented names. A day later the roster was full: students suddenly panicked that they would not have enough time slots!

More to the point: if you offer material via e-Learning you have to impose rigid discipline. Maybe the best approach is to have small tests (exams) everyone has to take every two weeks or so, to prevent learning from being left to the last possible moment (when labs and computer networks may not even be able to handle the load). You do not allow students to drink or eat when you give a lecture; so do not permit this in the lab, either. I am convinced you can think of further ways to make sure that students learn all the time in small increments. This is why the German Bank Academy, run for many years by my good friend Joachim Hasebrook, had good success with e-Learning: employees would have one day to work through some material that would then be dealt with in interactive mode the next day. I will return to the book (Hasebrook and Maurer, 2004) a bit later for other reasons.

Another important lesson I learnt is that e-Learning, if done right, is well accepted to a certain extent, but if it is used too much it is not appreciated. To be specific: the first year of the e-Learning lab in Graz, with two courses taught this way, was a big success. When we added two more courses the year thereafter, enthusiasm was “strangely” more muted. Adding more courses the year after that caused student protests: “We like to learn a bit outside ordinary lecture halls, particularly if lecturers are bad and courseware is good. But we do not want to do this for more than a few courses”. The message seemed clear, but there was also the lingering idea that the first year was only well received because the idea of learning this way was novel; once this wore off, maybe even a few courses would not be acceptable? Well, this is one of the positive messages of this paper: do not worry, even if the novelty effect wears off, students are happy to use e-Learning as long as it is not overdone.

Thomas Ottmann conducted a very interesting experiment with his Freiburg lab. In a large class on data structures, about 100 students used the e-Learning material, while a similarly large group attended his lectures. At the end of the term, the final exam caused some temporary jubilation among all e-Learning supporters: the students using the lab did significantly better! However, the same students who did better in the e-Learning subject did worse in other subjects. Ottmann had set up his experiment in an impressively sophisticated way and could prove what we suspected: the e-Learning mode did not convey knowledge better, but students were attracted to this novelty and spent more time on this subject, at the expense of other subjects. The experiment was repeated in a modified way by the late Peter Warren in Denver and the late Jennifer Lennon (1994) in Auckland. Two important facts emerged: e-Learning is fairly equivalent to lectures, but one has to make sure that students do not spend too much time with it. Most importantly, one has to make sure that e-Learning material does not waste students’ time! In particular, the idea of using some gaming facilities in e-Learning material may endear the material to students, but can turn out to be quite time consuming, an argument sometimes overlooked (Guetl et al., 2005; Pirker and Gütl, 2015) unless compensated by, e.g. a competition of the kind “who is first?”.

Let me explain this with a very simple example. I have seen many e-Learning modules where learners can fire a cannon or such and have to find out, by looking at the simulated flight path, what angle is optimal to achieve the largest distance. After some 15 minutes of experimenting most students end up with the (correct) guess that close to 45° is optimal. However, this is a complete waste of time, since a simple calculation will yield the proof that 45° is optimal. Let v be the velocity of the shot, x the angle of the shot and t the time at which the shot touches the ground; then clearly the distance w of the shot is w = t·v·cos(x). The height h of the projectile at time t is of course given by h = t·v·sin(x) − (g/2)·t², where g is the gravitational acceleration. The projectile hits the ground when h = 0, i.e. when 0 = t·v·sin(x) − (g/2)·t². Solving for t gives t = (2/g)·v·sin(x), and plugging this into the formula for w we get w = (2/g)·v²·sin(x)·cos(x). To find the maximum of w we take the first derivative of w with respect to x, set it to 0 and simplify, ending up with cos(x) = sin(x), implying x = 45°. Thus, rather than experimenting for some time and then hoping to have found the right answer, solving a simple “min-max problem” proves that 45° is optimal. And even more: the optimal angle is independent of both the velocity and the gravitational force!
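The claim can also be checked numerically with a few lines of code. The following minimal sketch (the speed of 30 m/s is an arbitrary assumption; the optimum does not depend on it) simply scans all angles in one-degree steps:

    import math

    def shot_range(angle_deg, v=30.0, g=9.81):
        # Horizontal distance of a projectile launched at angle_deg with speed v.
        x = math.radians(angle_deg)
        return (2.0 / g) * v ** 2 * math.sin(x) * math.cos(x)

    best = max(range(1, 90), key=shot_range)
    print(best, shot_range(best))   # prints 45 and the corresponding maximal distance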

I have seen still more boring material where students, e.g., have to practise solving half a dozen systems of linear equations: once they have understood that this is done by converting the system to “triangular form” by subtracting multiples of some rows from others, it is just frustrating for them to have to perform the same boring, trivial manipulations many times.

There is one famous case where a number of attempts were made to help students get some practice in calculating the derivative of a function. The basic idea is that a programme (repeatedly) generates some random function f(x), the programme finds the derivative g(x) using the few simple rules for doing so, and the student also computes the derivative of f(x), call it h(x) (it is the “quotient rule” and the “chain rule” that sometimes confuse students). The computer now “only” has to check whether g(x) and h(x) are identical. However, it was shown by Matijasevic in 1968 that this is an unsolvable problem, see Rozenberg and Salomaa (1995). That is, there is no way one can write a programme that, for arbitrary functions g(x) and h(x), determines whether the two functions are identical. Thus, all early attempts to write a programme for practising derivatives were unsuccessful. However, in Gillard and Maurer (1990) a pragmatic approach solves the problem: both functions are evaluated at 100 randomly chosen points. If their values agree in all cases, it is clear that they are very likely identical. Cris Calude from Auckland actually showed that, under mild assumptions, if h(x) and g(x) agree at 100 random points the likelihood that they are not identical is about 10^(-50), i.e. very small indeed. Thus, for all practical purposes the method did solve the unsolvable problem, and this exercise module has been used at various universities.
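A minimal sketch of this pragmatic random-evaluation test might look as follows; the function names, the sampling interval and the tolerance are illustrative choices, not those of Gillard and Maurer:

    import math
    import random

    def probably_equal(g, h, trials=100, lo=-10.0, hi=10.0, tol=1e-9):
        # Heuristic identity test: evaluate both functions at random points.
        # One disagreement proves the functions differ; agreement at all
        # sampled points makes it overwhelmingly likely that they are identical.
        for _ in range(trials):
            x = random.uniform(lo, hi)
            try:
                if abs(g(x) - h(x)) > tol * max(1.0, abs(g(x))):
                    return False      # definite counterexample found
            except (ValueError, ZeroDivisionError, OverflowError):
                continue              # point outside a common domain, skip it
        return True

    correct = lambda x: x * math.cos(x) + math.sin(x)   # derivative of x*sin(x)
    student = lambda x: x * math.cos(x) - math.sin(x)   # a typical sign error
    print(probably_equal(correct, student))             # False (they differ)
    print(probably_equal(correct, lambda x: math.sin(x) + x * math.cos(x)))  # True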

After this bit of an excursion into mathematics, let me return to one very important issue that has to be taken very seriously since it is often ignored: avoid wasting students’ time. Thus, as important as video clips are in some instances for instructional purposes, make sure they are not longer than is essential. If you use interactive diagrams or such, make sure you only show what is really important, in compact form. Do not use pictures unless they really convey important information or are used to break monotony. After all, one of the most surprising results in the book (Hasebrook and Maurer, 2004), based on 10,000 users, is that multimedia material often distracts, and wastes time, rather than being helpful.

As the internet became more and more widespread, the idea that material on the internet could be used for instructional purposes became clear to more and more people. I am proud to say that together with my good friends Cris Calude (NZ) and Arto Salomaa (Finland) we started the first fully refereed free online journal in 1994: www.jucs.org. The journal was the first of its kind (free submission, free reading) and is still going strong. Just convince yourself by visiting the URL mentioned.

Of the many papers written on using material from the internet for educational use, let me mention a few early ones: Lennon (1994), Lennon (1995), Marchionini and Maurer (1995), Maglajic and Scerbakov (1997), Dietinger and Maurer (1997), Maurer (1996). The efforts to include material from the web as an extension of other learning material never stopped (Maurer and Mueller, 2011).

It also became more and more clear that e-Learning should really be seen in a wider context: knowledge management, see Maurer (1999).

3. Success and failure of e-Learning environments till about 2000

I believe I have made it clear that there are many important points one has to observe when setting up an e-Learning environment. In summary, e-Learning set-ups were successful in many cases long before the turn of the century, yet those aimed at completely replacing ordinary teachers failed. I will now outline where e-Learning has proved particularly useful. The advent of new technologies after 2000 has not changed the situation dramatically, as I will mention in Section 4.

In general, e-Learning has been more successful in companies (or high-priced educational institutions) than in schools and “ordinary” universities. The main reason is that a rigid schedule, immediate tests of whether material has been digested, some other imposed control, or the financial motivation to get something for what has been paid for makes learners use the resources whether or not they particularly like them and whether or not they are top notch. Also, in such environments negative feedback is taken very seriously and acted upon.

In settings where there is more freedom, where work on some subject can be delayed in favour of other endeavours, other incentives are required for e-Learning to be used widely. However, situations abound where e-Learning is of real help. I will just mention a few typical cases.

When students have been sick, or come from a different environment, e-Learning may be the only chance to catch up. I have used courseware to make sure students come to a seminar with the necessary background. In such cases, the material allows students to skip large parts if they know it, but requires them to invest some energy if they are not sufficiently familiar with it. Thus, I was able to assume that the knowledge required to follow was universally available when my seminar started. I have also found e-Learning quite useful for refreshing knowledge: persons who were quite familiar with material in some area, but had not used it for a long time, could use e-Learning to again understand all that was required.

Of course there is the old argument that e-Learning is useful because one can choose the speed one is comfortable with, and can learn when time is available: a young mother or a working individual just may not have the time to attend a lecture or such. In addition, if educational institutions are at a considerable distance much commuting time can be eliminated. Those arguments were exactly the ones used to argue for “open universities” even before much technology was available. Learning this way does require a degree of self-discipline and/or outside control.

E-Learning material can also be very useful for learning specific skills. I remember fondly that the most successful training courseware I developed was for learning how to tie the knots necessary to get a sailing licence. The courseware came with three pieces of string. For each type of knot it would show pictorially, with pictures of the strings, step by step, how the strings have to be manipulated.

This kind of approach applies to both manual and intellectual tasks where the main aim is to learn how to follow a certain strategy step by step. Looking at today’s situation, where many things can best be learnt by following YouTube clips or such, it is clear that step-by-step diagrams or video clips for teaching were underestimated for a long time.

For me, collaboration with a traditional board game manufacturer was particularly revealing. We wanted to teach kids the rules of a game, be it chess, a card game, some newly invented stuff, etc. The idea of writing down the rules of the game in simple language failed: parents would often not take the time to read (and explain) the rules to the kids, and by the time kids were able to read “with understanding” they would be eight or nine years old, while the games were intended for five-to-six-year-olds. Even 15 years ago, we started to shift emphasis by shipping games not with a booklet explaining the rules but, at that time, with a video cassette that even four-year-olds could slip into their TV set. The material on the videotape did not explain the rules, but showed tiny scenes of a game played by kids, good moves rewarded by smiles, bad ones by frowns or a gesture indicating that the move was even illegal. From those bits of watching the game kids would learn quite efficiently. Videotapes were later replaced by CDs or such, and today by video clips. IKEA is still conservative enough to hand out a booklet with complex diagrams rather than some electronic material showing us how to assemble a complex piece of furniture step by step.

It also became clear that working for hours on a computer to learn something may not be ideal. But blending it with other activities when needed (“blended learning”), particularly integrating it into ordinary work (“learning on the job”), became more and more possible and attractive when working in a network environment, where access to needed information is readily available. Other attempts emphasized “learning by doing” or “learning by experimenting” but, as mentioned before, although this may work or even be fun, it usually takes an inappropriate amount of time to achieve the desired aims. E-Learning (or better, e-Teaching) can in many instances still be much improved by using ideas as mentioned above. However, there are also areas where the ideal methods of teaching appeared long before the advent of computers. In such cases one has to weigh carefully whether e-Learning is appropriate.

One interesting example is language learning. To just learn the translation of words from one language to another, the “stack technique” was used for a long time with very good results. A typical version is this. Learners get a pack of cards. One side of each card has the word or phrase in, say, English, like “young girl”, the other side the translation into the language to be learnt, say Italian, i.e. “bambina”. The cards are placed as a stack with the English side on top in “spot one”. The learner takes card after card, each time speaking the translation aloud and writing it on a slip of paper before turning the card over. If the answer is correct, the card is placed, again English side up, on a growing pile in “spot two”; if the answer is incorrect or unknown, the correct answer is looked at, read aloud, and the card placed on a growing pile in “spot three”. Eventually the stack in spot one is empty; the cards of the stack in spot three are then shuffled, placed in spot one, and the process continues. This is repeated until all cards are in the stack at spot two, i.e. each word has been translated correctly once. Usually the stack in spot two is now shuffled, placed in spot one and the process repeated. If this is done five times, each word has been translated correctly five times and will stick in memory for some time. A bit of refreshing a few days later will show how much has been retained beyond the learning session, indicating whether further refresh cycles may be advisable.
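Expressed as a programme, the stack technique looks roughly as follows; this is only a minimal interactive sketch, and the example vocabulary and the five rounds are illustrative assumptions:

    import random

    def learn_stack(cards, rounds=5):
        # Simulate the manual "stack technique" for vocabulary drilling.
        # cards: list of (prompt, answer) pairs, e.g. ("young girl", "bambina").
        for r in range(1, rounds + 1):
            spot_one = cards[:]          # stack to work through in this round
            spot_two = []                # correctly answered cards
            while spot_one:
                spot_three = []          # wrongly answered cards
                for prompt, answer in spot_one:
                    reply = input(f"Round {r} - translate '{prompt}': ").strip()
                    if reply.lower() == answer.lower():
                        spot_two.append((prompt, answer))
                    else:
                        print(f"  correct answer: {answer}")
                        spot_three.append((prompt, answer))
                random.shuffle(spot_three)
                spot_one = spot_three    # repeat until every card was right once
            cards = spot_two[:]
            random.shuffle(cards)        # reshuffle for the next round

    # learn_stack([("young girl", "bambina"), ("house", "casa")])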

Clearly, the above process can easily be carried out on a computer, exactly as described, with a simple programme such as the sketch above. Here is the surprise: many attempts have shown that the manual way produces better results. It is generally accepted that the haptic/tactile actions carried out help more than just using the keyboard. I mention this example to point out that traditional techniques sometimes cannot be carried over one-to-one to computers.

Note in passing that language labs existed long before they were computerized, using sophisticated versions of what was described above, and they were particularly useful because even with techniques that are obsolete today, like tape cassettes, the important issue of pronunciation could be handled to some extent. With modern computer-based language labs this can be improved further by checking the pronunciation of the learner against what it should be (this is trickier than it might appear, since the comparison must take into account different pitches and speeds of pronunciation).
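One common technique for tolerating different speaking speeds in such a comparison is dynamic time warping (DTW). The following is only an illustrative sketch on a toy pitch contour, not the method of any particular language lab, and it assumes the feature extraction (e.g. pitch tracking) has already been done:

    def dtw_distance(a, b):
        # Dynamic time warping distance between two per-frame feature sequences:
        # the sequences are aligned non-linearly before frame differences are summed,
        # so the same contour spoken faster or slower still scores as similar.
        INF = float("inf")
        n, m = len(a), len(b)
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i][j] = cost + min(d[i - 1][j],      # learner frame stretched
                                     d[i][j - 1],      # reference frame stretched
                                     d[i - 1][j - 1])  # frames aligned one-to-one
        return d[n][m]

    reference = [1, 2, 3, 4, 3, 2, 1]
    slow      = [1, 1, 2, 2, 3, 3, 4, 4, 3, 3, 2, 2, 1, 1]
    print(dtw_distance(reference, slow))   # 0.0: the same contour at half speed matches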

Let me finish this section by claiming that the programming work on improving and using e-Learning had its heyday in the period covered. The first large e-Learning conferences started around 1985. In 1987, the International Conference on Computer Assisted Learning (ICCAL) was first organized by the University of Calgary, later by Acadia University in Wolfville under Ivan Tomek and by the University of Texas at Dallas under Fillia Makedon: the number of participants kept growing, so some organization had to step in. It was Gary Marks, head of AACE (www.aace.org/), who took over: it was my privilege to work with him on founding the conference series ED-MEDIA. When the number of participants exceeded some 1,500 we decided to split off the conference series WebNet, which later turned into E-Learn. Both ED-MEDIA and E-Learn are still continuing. I slowly pulled out of both of them after 2000 for two reasons: it was time for younger blood, and I had the gut feeling that the conferences were shifting more and more from a strong computer-science component (my area) towards a more pedagogical orientation. I was not completely wrong: the flavour of ED-MEDIA had changed enough by 2016 that this year only very few participants were computer scientists.

4. E-Learning after 2000

Due to the availability of inexpensive networks the emphasis shifted from e-Learning labs to e-Learning environments that were independent of a room full of computers. Rather, any computer connected to a network would be a device that could be used for learning and teaching, i.e. could be used for web-based teaching/training, as already visible in Helic et al. (2002, 2004).

New systems went beyond e-Learning by including course management and learning management. In modern systems, contact between learners and teachers/tutors is considered important. Feedback, even anonymous feedback, is desirable (Dreher and Maurer, 2005). Students are encouraged to make notes in learning material (Korica and Scerbakov, 2005), and even the idea of tours guiding students, led by a mentor, emerged quite early (Helic and Scerbakov, 2001).

Even simple smartphones allow material to be reviewed with a small quiz on the way to school or work on public transport (Schinagl and Maurer, 2007).

Some of the learning management systems (LMSs) started to supply quite sophisticated communication and collaboration features. The “WBT-Master” (Ebner et al., 2014; Scerbakov et al., 2015; Schaffert and Ebner, 2010) allowed students to communicate with each other and with the instructors; team-shared data was available, online tutoring was offered, notes and other feedback tools were provided, etc. However, although the system served close to 100,000 students over time, many of the communication facilities were next to superfluous. Most communication was done via the same social media used otherwise. Thus: use systems for communication already known to students; do not try to do “better” by providing a new system or interface. On the other hand, it is tempting to try to apply social media to learning as well, as in Ebner (2009) and Ebner and Harmandic (2016).

A further agent of change was the emergence of the MOOC, essentially (quoting Wikipedia) “a Massive Open Online Course aimed at unlimited participation and open access via the web. In addition to traditional course material […] many MOOCs provide interactive user forums to support community interactions among students and instructors. MOOCs are a recent and widely researched development. They were first introduced in 2008 and emerged as a popular mode of learning in 2012”.

As an LMS, the open-source software Moodle (https://moodle.org/) is now one of the most widely accepted.

Research in e-Learning today focusses not so much on new technology as on how material has to be prepared to be successful, and on how students use it (Khalil and Ebner, 2016; Taraghi et al., 2013). This is also explained clearly in Serdyukov (2015). One system called iMooX that has proven very successful was developed by a team in Graz (Neuböck et al., 2015): a typical lesson consists of a number of clips of three to seven minutes, each clip followed by some test questions to ensure that the material has been “digested” properly. The material is very popular with students since it also comes in compact written form for those who do not want to spend time watching videos. Figure 3 shows a bit of a course on “English for Chemists”, which is important for students in Austria: their mother tongue is German and their English is OK but not specialized, yet all advanced courses are taught in English.

In summary, one can expect that large course repositories are here to stay, and every educational institution will have to make them available. Concerning LMSs, their only important communication features are likely to be forums (allowing discussion between instructors and students) and online tutors, available via some channel during certain office hours. Otherwise, standard social media will be used for communication, eliminating the need to have such features built into the LMS.

5. The future of e-Learning

There are many ways to teach and to learn, with and without computers. I hope that my looking back at successful and not so successful attempts has at least shown one aspect clearly: one large homogeneous system for e-Learning does not make sense. Never put all your eggs in one basket.

The real challenge for innovation in e-Learning is to find the correct mix of techniques, with the mix depending much on application areas, students and scenarios.

A few short video clips followed by some test questions may be nice; but so may plain “presentation-type material”, or, where possible, material presented by a human teacher who, for a change, does not stick to PPTs but captures attention with the “missionary spirit” that fascinates learners. These are just some of the many, many ways to go.

Main credo: do not be boring, switch media, use competitiveness, and use technology when it is suitable or when it works as a good surprise. Above all: the quality of the teacher who is lecturing or who has prepared the material is most important.

The two statements that the author believes are blatantly wrong are:

  1. One can make learning arbitrarily easy and entertaining. No. To get good at some sporting activity you have to work and let your body sweat a bit; to be good at some cognitive activity you have to work and let your brain sweat a bit. That does not mean that you should ban gaming or entertainment from e-Learning, but you have to use the right amount.

  2. The (Western) world has invested hundreds of billions of dollars in computers for e-Learning, often driven by commercial interests. Let us continue to do so. No. It may well be that the same amount used for training more and better teachers would have been more effective. That does not mean that we should not use computers in all educational institutions (indeed we should), but we have to use the right amount.

Here then is the enormous challenge we are facing: let us try to describe the very large number of circumstances where learning is essential, and then let us find the right mix of approaches for each situation. To put it bluntly: many of us have believed there would be one ideal solution for e-Learning. Now we know: we have to find the ideal solution for e-Learning depending on a staggering variety of scenarios and possibilities. This is what this journal should be about.

Note

1. The main driving force, Control Data Corporation (CDC), once one of the leaders in supercomputers, slipped into deeper and deeper financial problems; what remained folded in 1999. This was also a big blow to PLATO development, yet COSTOC was able to use some of the know-how, including the above-mentioned programming genius John Garrat.

References

Bitzer, D.L. (1986), “The PLATO project at the University of Illinois”, Engineering Education, Vol. 77 No. 3, pp. 175-180.

Dietinger, T. and Maurer, H. (1997), “How modern WWW systems support teaching and training”, Proceedings of ICCE, pp. 37-51.

Dreher, H. and Maurer, H. (2005), “Anonymous feedback in e-Learning systems”, Proceedings E-Learn, pp. 2019-2025.

Duval, E. and Forte, E. (2008), “On the role of technical standards for learning technologies”, IEEE Transactions on Learning Technologies, Vol. 1 No. 4, pp. 229-234.

Ebner, M. (2009), “Interactive lecturing by integrating mobile devices and micro-blogging in higher education”, Journal of Computing and Information Technology, Vol. 17 No. 4, pp. 371-381.

Ebner, M. and Harmandic, S. (2016), TwitterSuitcase – How to Make Twitter Useful for Event/Lecture Participants, Learning Environments: Emerging Theories, Applications and Future Directions, Nova Publisher, pp. 175-196.

Ebner, M., Maurer, H. and Scerbakov, N. (2014), “New features for e-Learning in higher education for civil engineering”, The Journal of Universal Science and Technology of Learning, Vol. 6 No. 2, pp. 93-106.

Forte, E., Wentland-Forte, M. and Duval, E. (1997), “The Ariadne project: knowledge pools for computer based and telematics supported classical, open and distance education”, European Journal of Engineering Education, Vol. 22 No. 1, pp. 61-74.

Gillard, P. and Maurer, H. (1990), “Tiny CAI tools – giving students ‘the works’”, Journal of Microcomputer Applications, Vol. 13, pp. 337-345.

Guetl, C., Dreher, H. and Williams, R. (2005), “Game-based e-Learning applications of e-tester”, ED-Media, pp. 4912-4917.

Hasebrook, J. and Maurer, H. (2004), Learning Support Systems for Organizational Learning, World Scientific Publishing Co., Singapore.

Helic, D. and Scerbakov, N. (2001), “Mentoring sessions: increasing the influence of tutors on the learning process in WBT systems”, Proceedings WebNet, pp. 515-519.

Helic, D., Maurer, H. and Scerbakov, N. (2002), “Implementing complex web-based training strategies with virtual classrooms”, Proceedings E-Learn, pp. 426-432.

Helic, D., Maurer, H. and Scerbakov, N. (2004), “Delivering relevant training objects to personal desktop with modern WBT-systems”, International Journal on e-Learning, Vol. 3 No. 4, pp. 42-50.

Hofbauer, P. and Maurer, H. (1988), “Sorting techniques”, COSTOC Anthology, Vol. 7.

Huber, F., Makedon, F. and Maurer, H. (1989), “Hyper-COSTOC: a comprehensive computer-based teaching support system”, Journal of Microcomputer Applications, Vol. 12 No. 4, pp. 293-317.

Kappe, F., Maurer, H. and Scerbakov, N. (1992), “Hyper PC – a new training tool and its integration in a large hypermedia system”, Proceedings ETTE, Paris, pp. 271-282.

Khalil, M. and Ebner, M. (2016), “What massive open online course (MOOC) stakeholders can learn from learning analytics?”, in Spector, M., Lockee, B. and Childress, M. (Eds), Learning, Design, and Technology: An International Compendium of Theory, Research, Practice, and Policy, Springer, Heidelberg, pp. 1-30.

Koegel, J. and Maurer, H. (1987), “A rule-based graphics editor for presentation CAI”, Proceedings of the 2nd Rocky Mountain Conference on AI, pp. 133-142.

Korica, P. and Scerbakov, N. (2005), “Extending annotations to make them truly valuable”, Proceedings E-Learn, pp. 2149-2154.

Lennon, J. (1994), “Lecturing technology: a future with hypermedia”, Educational Technology, Vol. 34 No. 4, pp. 5-14.

Lennon, J. (1995), “Digital libraries as learning and teaching support”, The Journal of Universal Computer Science, Vol. 1 No. 11, pp. 719-727.

Longley, D. and Shain, M. (Eds) (1983), MUPID – The Microsoft Users Handbook, MacMillan Press, London, pp. 143-145.

Maglajic, S. and Scerbakov, N. (1997), “Customization of educational material delivered over the internet”, Proceedings ED-Media, pp. 659-664.

Makedon, F. and Maurer, H. (1987a), “CLEAR – Computer Learning Resource Centers”, Proceedings of IFIP Conference on Teleteaching Budapest, North-Holland Pub. Co., pp. 93-106.

Makedon, F. and Maurer, H. (1987b), “COSTOC – Computer Supported Teaching of Computer Science”, Proceedings of IFIP Conference on Teleteaching Budapest, North-Holland Pub. Co., pp. 107-119.

Makedon, F., Maurer, H. and Ottmann, T. (1987), “Presentation type CAI in computer science education at university level”, Journal of Microcomputer Applications, Vol. 10, pp. 283-295.

Marchionini, G. and Maurer, H. (1995), “Digital libraries as components of modern computer supported learning environments”, Proceedings ED-Media, pp. 413-417.

Maurer, H. (1985), “Authoring systems for computer assisted instruction”, ACM Annual Conference, Denver, pp. 551-561.

Maurer, H. (1986), “Nationwide teaching through a network of microcomputers”, Proceedings IFIP, Dublin, pp. 429-432.

Maurer, H. (1987), “Presentation type CAI for classroom and lab use at university level”, Proceedings ICCAL, Calgary, pp. 27-29.

Maurer, H. (1988), “A report on the COSTOC project”, EATCS Bulletin, Vol. 35, pp. 48-53.

Maurer, H. (1989), “A heterogeneous data-base with hyper-navigation as new paradigm for CAI”, UNESCO – Conference Education and Informatics, pp. 476-481.

Maurer, H. (1996), “Late: a unified concept for a learning and teaching environment”, The Journal of Universal Computer Science, Vol. 2 No. 8, pp. 580-595.

Maurer, H. (1999), “The heart of the problem: knowledge management and knowledge transfer”, Proceedings Enable, pp. 8-17.

Maurer, H. and Mueller, H. (2011), “How to use the web’s information flood for teaching”, Proceedings ED-Media, pp. 3103-3108.

Maurer, H. and Scerbakov, N. (1996), Multimedia Authoring for Presentation and Education – The Official Guide to HM Card, Addison-Wesley.

Maurer, H. and Sebestyen, I. (1984), “An innovative project in telematics”, IFIP Information Bulletin, Vol. 18, pp. 6-7.

Maurer, H. and Tomek, I. (1990), “Hypermedia in teleteaching”, Proceedings IFIP Congress, North Holland Pub. Co., Amsterdam, pp. 1009-1015.

Maurer, H., Scerbakov, N. and Schneider, A. (1995), “HM-card: a new hypermedia authoring system”, Multimedia Tools and Applications, Vol. 1, Kluwer Academic Publishers, pp. 305-326.

Neuböck, K., Kopp, M. and Ebner, M. (2015), “What do we know about typical MOOC participants”, Proceedings Emoocs Conference, pp. 183-190.

Pirker, J. and Gütl, C. (2015), “Virtual worlds for 3D visualizations: advancing physics learning through traversing a multi-modal experimentation space”, Proceedings International Conference on Intelligent Environment, Vol. 19, p. 373.

Rozenberg, G. and Salomaa, A. (1995), Cornerstones of Undecidability, International Series in Computer Science, Prentice-Hall.

Scerbakov, A., Ebner, M. and Scerbakov, N. (2015), “Using cloud services in a modern learning management system”, Journal of Computing and Information Technology, Vol. 23 No. 1, pp. 75-86.

Schaffert, S. and Ebner, M. (2010), “New forms of and tools for cooperative learning with social software in higher education”, in Morris, B.A. and Ferguson, G.M. (Eds), Computer-Assisted Teaching: New Developments, Nova Publisher, pp. 151-165.

Schinagl, W. and Maurer, H. (2007), “E-quiz – a simple tool to enhance intra-organisational knowledge management, elearning and edutainment training”, Proceedings E-Learn, pp. 1080-1088.

Serdyukov, P. (2015), “Does online education need a special pedagogy?”, Journal of Computing and Information Technology, Vol. 23 No. 1, pp. 61-74.

Taraghi, B., Grossegger, M., Ebner, M. and Holzinger, A. (2013), “Web analytics of user path tracing and a novel algorithm for generating recommendations in Open Journal Systems”, Online Information Review, Vol. 37 No. 5, pp. 672-691.

Videotex (1982), “Videotex makes dramatic breakthrough”, Viewdata Conference, London, pp. 135-143.

Further reading

Dietinger, Th (1998), “GENTLE – (General Networked Training and Learning Environment)”, Proceedings ED-Media, pp. 274-280.

Scerbakov, A. and Scerbakov, N. (2015), “A method for quantitative evaluation of university lecturing”, Proceedings International Quality Conference, pp. 375-381.

Scerbakov, N. (2013), “Integration of modern LMS and popular cloud services”, Proceedings ED-Media, pp. 1124-1130.

Corresponding author

Hermann Maurer can be contacted at: hmaurer@iicm.tu-graz.ac.at
