Janet Ilieva; Steve Baron; Nigel M Healey
1 July 2002 - International Journal of Market Research
In a recent article on conducting international marketing research in the twenty-first century (Craig & Douglas 2001), the application of new (electronic) technology for data collection was encouraged. Email and web-based data collection methods are attractive to researchers in international marketing because of their low costs and fast responses. Yet the conventional wisdom is that, because some people still do not have access to email and the Internet, such data-collection techniques may often result in a sample of respondents that is not representative of the desired population. In this article we evaluate multimode strategies of data collection that combine web-based, email and postal methods as a means for the international marketing researcher to obtain survey data from a representative sample. An example is given of a multimode strategy applied to the collection of survey data from respondents across 100 countries.
The self-completed postal or mail survey is a recognised form of data collection in marketing research (Dillman 1978). There are well-documented practical problems with this form of data collection: poor response rates, slow response, and the manual transcription of data from a hard-copy questionnaire into an appropriate statistical analysis tool, with the attendant risks of non-response and data entry errors. Consequently, research into online data collection methods increased significantly during the late 1990s. This was preceded by (1) a growing number of Internet and email users, which started to mirror the general population in some countries (Kehoe et al. 1998), and (2) various computer-assisted data collection techniques, such as Computer-Assisted Personal Interviews (CAPI) and Computer-Assisted Telephone Interviews (CATI). Investigation into the validity of online data collection has been grounded mainly in comparisons between online surveys and mail surveys (Schaefer & Dillman 1998; Stanton 1998; Sheehan & McMillan 1999). The need to master new tools, incorporating the latest technology in data collection, has been identified by Craig and Douglas (2001), who advise that international marketing researchers will need to broaden their capabilities in order to design, implement and interpret research in the twenty-first century.
However, a sample of respondents with Internet/email access may not be representative of certain populations. To overcome this problem, Schaefer and Dillman (1998) suggested a multimode strategy of data collection, i.e. approaching respondents both via email and by post/mail. This paper is an empirical investigation of that strategy, which was successfully applied in a survey covering 150 countries. The study argues that the different modes of online and postal surveys complement each other, and that there are greater advantages in using them together than in applying them separately.
The paper begins with a review of the literature on online surveys and builds a 'bridge' between the theoretical advances of academic researchers and the practical conclusions suggested by experts providing email and web survey services. A comparison is made between online surveys and postal surveys in terms of response rates, data quality, response time and financial resource implications, and a suggestion is made on how to mix the email and web-based modes of approach in order to optimise the advantages of online data collection techniques. An online survey, based on empirical research into central bank independence (CBI), which drew responses from 100 countries, is used for illustration. The account mirrors the structure of the literature review and shows how the theoretical advances were implemented in the survey. Finally, the limitations of online surveys and the weaknesses of online self-administered questionnaires are outlined, along with some possibilities for how they may be dealt with in the future.
Email and web-based surveys: a literature review
There are many similarities between online and postal surveys, stemming from the common methodology of self-administered questionnaires (referred to hereafter as SAQs). However, they differ in the medium through which they are carried out. Depending on the design, surveys can be conducted through email, or they can be posted on the web and the URL provided (password-protected or not, depending on the nature of the research) to respondents who have already been approached. When a wide audience is targeted, the survey can be designed as a pop-up survey, which appears as a web-based questionnaire in a browser window while users are browsing the respective websites. The latter type of online surveying has been described as 'the most positive contribution to website research in the brief history of Internet Research' (Comley 2000).
Introducing email and web-based surveys
Although online surveys have been used mainly for website research, academics and practitioners from mainstream disciplines such as marketing are showing increased interest in applying online data collection techniques. A recent web-based survey research project (Ray et al. 2001) shows the following discipline demographics: marketing (70%), information systems (27%), management (2%) and economics (1%). Significant in this research was the proportion of respondents conducting an online survey for the first time (39%), which indicates a growing number of people employing this mode of data collection.
Figure 1 shows the technological 'evolution' of some standard data collection methods (postal surveys, telephone interviews and personal interviews) and their upgraded 'twins'.
A significant advantage of email surveys is the speed of data collection (see Table 1). Data are collected at very low cost to the researcher (no postage or printing costs and no involvement of interviewers), with instant access to a wide audience irrespective of geographical location, which makes the mode very appropriate for cross-sectional studies and/or international comparisons. A web-based survey suits a wide audience, where all visitors to certain websites have an equal chance to enter the survey. However, the researcher's control over respondents entering a web-based survey is lower than for email surveys. Another advantage of web-based surveys is the better display of the questionnaire, whereas email software still suffers from limitations in its design tools and in offering an interactive and clear presentation. However, these two modes of survey may be mixed (i.e. a multimode approach), combining the advantages of each.
The apparent disadvantage of SAQs is the comparatively low response rate (30% is considered reasonable: Saunders et al. 1997). However, there are significant differences between the response rates of web surveys and email surveys. Comley (2000) summarises the response rates of all virtual surveys in 1999, most of which were in the range 15% to 29%. On the other hand, a report on email surveys by Virtual Surveys Ltd (2001a) shows that the response rate to email surveys varies between 25% and 50%,1 significantly higher than for web-based surveys. Schaefer and Dillman (1998) achieved a 58% response rate to their email survey, little different from their mail survey (57.5%). Different results were obtained by Wygant and Lindorf (1999) in a study comparing an email survey with a mail survey: the response rate for the electronic survey was much higher (50%) than for the mail survey (32%). Ray et al. (2001) summarised the response rates in their survey by sector as follows: 40.8% (academic), 31.3% (general web) and 19% (business). Runham (1999) achieved up to an 80% response rate in a web-based survey where, according to Comley (2000), the user was 'hi-jacked and the survey appeared in the main browser window'.
Much of the research on mail surveys may be applied to online data collection. Dillman (1978, 1991) acknowledges the importance of 'personalisation' for a good response rate, where a letter addressed to a specific individual signals the respondent's importance; these techniques may be applied successfully in email surveys (Schaefer & Dillman 1998). Martin (1994) acknowledges the impact of topic salience on mail survey response behaviour, demonstrating a positive relationship between them. Sheehan and McMillan (1999) explore the same issue for email surveys and their findings support this relationship, i.e. the response rate increases with issue salience. Furthermore, respondents' interest in the researched topic provides an attractive incentive for participation in the form of a promise of the survey results. Ray et al. (2001) found that in 57% of the web-based surveys conducted, respondents were promised survey results as an incentive to participate; in about 36% of the surveys, respondents were promised inclusion in a draw/raffle to encourage participation. On the practitioners' side, Comley's (2000) findings suggest that incentives have little impact on response rate in online surveys (+10% for email surveys). However, incentives may have a negative impact on data quality, as respondents can be tempted to distort data by, for example, entering the survey several times to increase their chances of winning a prize. Virtual Surveys Ltd (2001b) encountered a case where a respondent filled out a pop-up survey 750 times in order to increase the chances of winning the prize. Furthermore, incentives may also encourage some people to enter irrelevant information simply in order to complete the survey and enter the raffle/draw. This is more likely to occur when respondents are selected on a random-choice basis and the survey is web-based, which is associated with less researcher control over who enters the survey. It is less likely to happen if the survey is conducted through email and respondents' entry depends on a personalised invitation to participate.
Comley (2000) lists several factors (in order of importance) affecting the response rate to pop-up virtual surveys: (1) the style of the survey's first page, (2) the relationship with the website/brand, and (3) the interest/relevance of the survey. The author captures the relationship between the length of the survey's first page and the response rate in the formula:
Response rate (%) = 40 - (8 x number of screens in the first page)
where the number of screens is defined by the number of scroll-downs to view the complete page.
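To make the rule of thumb concrete, the sketch below (in Python) applies it directly; the function name and the clipping at zero for very long pages are our own assumptions, not part of Comley's formula.

```python
def predicted_response_rate(scroll_downs: int) -> float:
    """Comley's (2000) rule of thumb for pop-up web surveys:
    response rate (%) = 40 - 8 * number of screens in the first page,
    where 'screens' is counted as the scroll-downs needed to view the page.
    Clipping at 0% for very long pages is our own assumption.
    """
    return max(0.0, 40.0 - 8.0 * scroll_downs)

print(predicted_response_rate(0))  # first page fits on one screen: 40.0%
print(predicted_response_rate(2))  # two scroll-downs needed: 24.0%
```

On this reading, each extra screen of the first page costs roughly eight percentage points of response, which explains the practitioners' emphasis on a short, single-screen opening page.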
A significant disadvantage of email surveys relates to the confidentiality of the participants in the survey. Mail surveys give respondents the choice of being anonymous, whereas emails always disclose the sender's identity. Perhaps response rates would be higher if respondents' anonymity was somehow guaranteed beforehand.
Short response time is certainly one of the greatest advantages of online surveys. Online surveys allow messages to be delivered instantly to their recipients, irrespective of their geographical location, and the same applies to the speed of the response. In most reported cases, responses to online surveys arrived within a month: Ray et al. (2001), in their survey of online surveys, found that 34% of online surveys took under two weeks, 33% between two weeks and one month, and 33% longer than one month. For example, Wygant and Lindorf's (1999) email survey took two days for 80% of the final responses to be received, and Schaefer and Dillman's (1998) survey took 9.16 days. Most comparative research on online and postal surveys concludes that the response time of the latter is longer than that of the former (Mehta & Sivadas 1995; Tse et al. 1995; Bachman et al. 1996; Weible & Wallace 1998). The differences in response time and response rates between electronic and mail surveys are summarised in Table 1.
Response time in email surveys is shorter (5.59 days on average) than in mail surveys (12.21 days on average). Sheehan and McMillan (1999) explored the impact of pre-notification of a survey on the speed of the responses, and the expected positive relationship was partially supported by their findings. A possible reason for delayed responses may be the timing of a survey: for example, when email surveys are conducted during the summer, some email users (e.g. university members) check their email accounts less frequently. We would expect response times to fall in future, with increases in the number of email users, in the frequency with which email is checked, and in the time respondents spend online.
The response time of web-based surveys is somewhat controlled by the researcher conducting the survey, since it depends on the length of time the survey is posted on the web. Virtual survey experts suggest that virtual surveys should run for at least one week (usually two), allowing visitors to the respective website enough time to participate (Virtual Surveys Ltd 2001b). For tracking studies, they advise running surveys every month or quarter for a short period of time rather than continuously.
Financial resource implications
Online surveys have minimal financial resource implications, and the scale of the survey is not tied to finances, i.e. large-scale surveys do not require greater financial resources than small ones. The expenses of self-administered postal surveys typically comprise outward and return postage, photocopying, clerical support and data entry, none of which applies to online surveys. Furthermore, the questionnaire can be programmed so that responses feed automatically into the data analysis software (SPSS, SAS, Excel, etc.). This adds to the time-saving advantages of online surveys on the one hand and avoids all the data input (and associated transcription errors) on the other. Wygant and Lindorf (1999) estimate that the budget for electronic administration equalled one-sixth of the cost of mail administration, which allowed two researchers to conduct 27 electronically administered survey projects within a year, delivering over 50,000 questionnaires to 35,000 people.
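As a rough illustration of the automation described above (not the authors' actual set-up), the following Python sketch turns form-style submissions into a CSV file that SPSS, SAS or Excel can import directly; the field names and the in-memory submissions are hypothetical.

```python
import csv
from urllib.parse import parse_qs

# Hypothetical raw submissions, in the name=value format an HTML form
# would POST (application/x-www-form-urlencoded); field names are illustrative.
raw_submissions = [
    "respondent=central_bank_01&q1=yes&q2=no&q3=yes",
    "respondent=central_bank_02&q1=no&q2=no&q3=yes",
]

FIELDS = ["respondent", "q1", "q2", "q3"]

with open("responses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for raw in raw_submissions:
        parsed = parse_qs(raw)  # e.g. {'respondent': ['central_bank_01'], 'q1': ['yes'], ...}
        writer.writerow({k: parsed.get(k, [""])[0] for k in FIELDS})

# responses.csv can now be opened directly in SPSS, SAS or Excel,
# removing the manual transcription step and its associated errors.
```

The point of the sketch is simply that once responses arrive electronically, the whole data-entry stage collapses into a parsing step, which is where the one-sixth cost ratio reported by Wygant and Lindorf comes from.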
The inexpensiveness of online surveys has been confirmed in a wide range of studies (Mehta & Sivadas 1995; Schaefer & Dillman 1998; Kent & Lee 1999; Schuldt & Totten 1999; Sheehan & McMillan 1999). Respondents and researchers do, however, share the 'communication' cost, a cost that is otherwise covered fully by the institution undertaking the research (e.g. with self-addressed mail questionnaires or telephone interviews). Disk space is a further cost, additional to the telephone charges that respondents pay, as outlined by Ranchhod and Zhou (2001); this is relevant to email surveys only, where respondents have to download questionnaires in order to complete them. Provided that respondents have unlimited Internet access, no financial implications are expected to arise from their participation in online surveys.
It is not yet clear whether people react differently to email and mail surveys, as both are SAQ techniques and rely on respondents' comprehension of written text. Schaefer and Dillman (1998) conclude that email surveys provide more detailed and comprehensive information than mail surveys: in their study of email methodology, 69.4% of email respondents completed 95% of the survey, whereas only 56.6% of mail respondents did so. Furthermore, email participants answered the open-ended questions with 40 words on average, whereas the mail respondents' answers were briefer, at ten words on average. The hypothesis that email and web-based surveys provide more complete information is supported by research conducted independently by different authors (Mehta & Sivadas 1995; Bachman et al. 1996; Stanton 1998). The online approach also has psychological dimensions rooted in respondents' anonymity: respondents' candour is greatest when their anonymity is guaranteed (King & Miles 1995; Stanton 1998), and anonymity is a problem with email. To date, however, little research has been carried out into respondents' beliefs about their anonymity when using the web, and how sincere they are when filling in online questionnaires.
In summary, the major difference between online surveys and conventional mail surveys stems from the technology employed in conducting them, and the advantages offered by online surveys are therefore technology-driven. They will increase with technological improvements (e.g. shorter response times) and with the better data quality that comes from more frequent use of email and the Internet. The methodologies behind online and postal surveys are very similar and, irrespective of the variety of survey-based applications employed in data collection, they are still survey methodologies. Online surveys are therefore more a new mode of data collection than a new data collection method.
Central Bank Independence survey
The example shown in Figure 2 is drawn from a survey of CBI in 150 countries, carried out at the end of 1999 and the beginning of 2000. It is used here to demonstrate to international researchers the potential of carrying out global surveys. The respondents approached were members of the research department of the central bank and/or central bank board members. The survey contained an index-based questionnaire of 23 questions, which aimed to measure the independence of central banks from central government in terms of decision-making and the conduct of monetary policy. Central bank institutions in approximately 150 countries were approached, with 100 actually participating in the survey (a response rate of 66%). The exemplar questionnaire used binary coding ('Yes' or 'No' questions), although, depending on the nature of the research, other options may be used (e.g. Likert scales, multiple-choice answers or open questions).
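As a purely illustrative sketch of how such binary coding might be aggregated, the snippet below scores a 23-question 'Yes'/'No' questionnaire as the share of 'Yes' answers. The equal weighting, the 0-1 scale and the question labels are our assumptions; the paper does not describe how the CBI index was actually computed from the answers.

```python
# Hypothetical answers to 23 binary questions: q1..q19 'yes', q20..q23 'no'.
answers = {f"q{i}": ("yes" if i <= 19 else "no") for i in range(1, 24)}

def independence_index(answers: dict) -> float:
    """Share of 'Yes' answers across the binary questions, read here as
    0 = fully dependent, 1 = fully independent (an illustrative coding)."""
    yes_count = sum(1 for a in answers.values() if a.lower() == "yes")
    return yes_count / len(answers)

print(round(independence_index(answers), 2))  # 19 of 23 'Yes' -> 0.83
```

A binary coding like this keeps the respondent's task trivial (tick 'Yes' or 'No'), while still yielding a continuous index on which countries can be compared.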
During the online stage of the survey, central bankers were approached via an email containing the questionnaire in Hypertext Markup Language (referred to hereafter as HTML) format, and also in plain text format in case respondents were not comfortable with HTML.
The survey applied the multimode approach advanced by Schaefer and Dillman (1998). The use of email has been shown to generate higher response rates than web-based surveys (Comley 2000). On the other hand, the format of an email survey can be cumbersome to follow, which might discourage some respondents from answering (Schaefer & Dillman 1998). Web-based surveys can be designed to appear on the whole screen, enabling respondents to follow the sequence of questions more easily thanks to their interactive nature (Dillman & Tortora 1998). The CBI questionnaire was emailed as an HTML file and displayed in the browser's window (see Figure 2), combining the aforementioned advantages of email and web-based surveys and making the presentation of the questionnaire much clearer and more convenient for respondents to fill in. One rationale behind this approach was to save respondents' time, given the nature of their work.
The 'personalisation' of respondents, suggested by Schaefer and Dillman (1998), depended on the contact details posted on the central banks' websites; the people in charge of research at the respective central bank were approached by mail when their email contact details were not disclosed. 'Issue salience', as articulated by Sheehan and McMillan (1999) and Martin (1994), was strengthened by the relevance of the theme of central bank independence in many countries. The theme became a political preoccupation in the 1990s, when many governments started granting their central banks greater independence following a series of theoretical and empirical studies suggesting that greater central bank independence reduces the inflationary bias. Issue salience appeared to be crucial during the 'triangulation' phase of the survey, when respondents' data were triangulated with responses provided by independent experts based at academic institutions. The response rate for the academics, however, was significantly lower (only 21 countries responded out of 150), presumably reflecting their lower level of interest in the subject.
Because the survey required specialists in central banking to participate, a web-based survey was not appropriate (owing to the researcher's lack of control over who enters the survey), and the specialists were therefore approached through email. The response rate was 100% when central bankers were personally approached by email, and 0% when public enquiries departments were approached. This demonstrates the importance of identifying the 'right person' to participate in the survey and avoiding the 'gatekeeper'. However, when no electronic contact details are provided on the web, the 'right person' in the organisation may always be approached by post. This suggests the possibility of incorporating a multimode approach, sketched below: online and mail approaches can be employed in the same survey instead of being perceived as rival data collection techniques.
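The contact rule just described can be summarised as a small decision procedure. The sketch below is a schematic rendering under our own assumptions (the record fields, helper name and return values are hypothetical), not code used in the CBI survey.

```python
def choose_contact_mode(contact: dict) -> str:
    """Prefer a personalised email to the named individual; fall back to
    post when no personal email address is published on the institution's
    website. Public-enquiries inboxes are avoided entirely, since they
    yielded a 0% response rate in the CBI survey."""
    if contact.get("personal_email"):
        return "email"   # HTML questionnaire sent in a personalised message
    if contact.get("postal_address"):
        return "post"    # printed questionnaire posted to the named individual
    return "skip"        # no usable route to the 'right person'

print(choose_contact_mode({"personal_email": "research@cb.example"}))  # email
print(choose_contact_mode({"postal_address": "P.O. Box 1, Capital"}))  # post
```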
The online part of the CBI survey took four weeks. The emails were not all sent out on the same day, but over different weeks, which explains the overall length of the online stage: it took about a week to send out all the emails and to mail the letters, using addresses from a single database. Most of the electronically sent questionnaires, however, came back within two days, and even this response time would presumably have been shorter had the respondents been located in the same time zone. Central bankers identified as the 'right person' to approach but without published email addresses were contacted by post. The postal survey took longer than nine months for all the completed questionnaires to be returned; some of the letters travelled for more than three months one way, which significantly increased the time taken for the survey results to come back. This period also involved additional communication clarifying the nature of the survey. An interesting observation, indicating the web-awareness of respondents approached via mail, was that several of them, having received the postal questionnaire, emailed back requesting its electronic version, which was then supplied by email.
Personalised letters had a twofold benefit in this survey: they increased the response rate and, at the same time, increased confidence that the right person had responded. In cases where a 'non-personalised' letter was sent out (e.g. addressed to the head of research), the identity of the person who responded could always be checked against the respective central bank's web database.
A major concern regarding the validity of data collected on the web stems from the sampling frame (Ray et al. 2001), which is represented predominantly by a computer-literate population rather than one 'appropriate' for the survey sample (where the research topic is not related to web design, online shopping, etc.). A significant positive influence on data quality was the ease of contact and instant feedback from email respondents: when the researcher requests further information or clarification on some point, an instant reply from the respondent saves the effort of explaining the issue again and reintroducing the problem, and the same holds in the other direction when clarification or additional information is requested from respondents.
Some limitations of online surveys
The major advantage of postal over online surveys is that respondents have physical addresses, whereas as yet not everyone has an electronic address. The random choice principle can still be maintained in website surveys, where all visitors to a particular website have an equal chance of being selected; this, however, represents a much reduced sampling frame compared with what would be appropriate for studies not requiring computer literacy of the surveyed population. The demographic profile of Internet users in the USA in 1998 had started to mirror that of the general population, with increased representation of female users (38.7%), whereas Europe seems to be less 'gender-balanced', with 16.3% female respondents (Kehoe et al. 1998). For large-scale cross-country surveys, however, the multimode approach (i.e. online and postal) compensates for this under-representation of the general population.
Across all groups of users, the most commonly experienced problem with web surveys stems from the time needed to download pages, encountered by 64.8% of respondents (Kehoe et al. 1998). Participation in online surveys requires users to have email accounts specified in their browser's preferences menu, unless they use browser-enhanced email software (a problem commonly encountered by Pegasus and Eudora users). In those instances the CBI questionnaire was faxed back, which took no longer than email would have. Problems may also arise with older browsers, which can fail to display HTML questionnaires properly, and with differences in how the questionnaire appears in different browsers (Netscape, Internet Explorer).
Another significant limitation of online surveys stems from the technology required, which still suffers from being insufficiently user-oriented. The need for a more user-centred approach was first addressed in the Human Computer Interaction (HCI) literature, where HCI was conceptualised as a social phenomenon (Norman 1986, 1993; Laurel 1995). Problems encountered in the CBI survey related to proxy servers, which did not allow questionnaire-based emails through the system because of security precautions. Since this can occur only with institutional servers, it raises a significant concern when approaching governmental and other organisations online.
The major advantages of online surveys are:
- very low financial resource implications;
- short response time;
- researcher's control of the sample (with no interviewer involvement);
- data are loaded directly into the data analysis software, saving the time and resources associated with the data entry process.
Some of these advantages are expected to grow with increasing use of the Internet, which should positively affect the response time and data quality of online surveys. Response times should shorten further with more frequent use of the Internet and faster downloading of websites. Data quality is expected to improve as the number of Internet and email users increases, which in turn will improve the representativeness of the sample of online respondents. Technical problems will diminish in future with the move towards more user-centred technology and greater software compatibility, such as survey software compatible with most browsers and email programs.
An important issue in this method of data collection is the mode of approach. So long as Internet users do not yet mirror the general population in some countries, a combination of online and postal techniques will positively affect the response rate, the representativeness of the respondents and data quality. The mode of contact, however, depends on whether the 'right person's' contact details are available and on the sensitivity of the questionnaire.
The empirical research carried out on online surveys shows that email surveys generate better response rates than web-based surveys and give the researcher greater control over the sample of respondents, avoiding multiple entries to the survey by the same person. On the other hand, questionnaires are better displayed, more interactive and easier to fill in when presented in a browser window than in an email message. Establishing contact through a personalised email and providing the questionnaire in HTML format (or sending the website URL) combines the advantages of email and web-based surveys and optimises the use of online data collection.
Traditional mail surveys, however, have the advantage of guarding respondents' anonymity. Sensitive issues, which may prevent respondents from giving sincere answers, should be addressed via the post rather than online. Further research needs to be carried out on respondents' anonymity, which is easily compromised in online surveys; guaranteeing anonymity would be expected to improve both the response rate and data quality (i.e. respondents' candour).
There are examples of online surveys informing marketing theory (see Meuter et al. 2000), and such surveys are attractive precisely because of the advantages listed above. In practice, there are still many issues to address, both on the technical side and on the representativeness of the electronic sample. It is hoped that this summary will encourage further developments in international marketing regarding this relatively new mode of data collection.
Janet Ilieva,* Steve Baron^ and Nigel M. Healey*
*Manchester Metropolitan University and ^Monash University
Steve Baron
University of Liverpool
Email: S.Baron@mmu.ac.uk
Steve Baron is Professor of Marketing at the University of Liverpool
Management School. His research interests include customer-to-customer interaction, retail theatre and service quality. He has published widely in journals including Journal of Service Research, European Journal of Marketing, and Journal of Business Research. He has also worked as Market Research Coordinator for P&O Shopping Centres Ltd.
Nigel M Healey
Manchester Metropolitan University Business School
Nigel M. Healey is Professor of Business Economics and Dean of Manchester Metropolitan University Business School. His research interests are in the area of international business and global interdependence, with a particular emphasis on the impact of economic and monetary integration within the EU. He has served as an economic policy advisor to, among others, governments in Russia and Belarus and been a visiting Professor at universities in the United States, China and eastern Europe.
Janet Ilieva
Manchester Metropolitan University
Janet Ilieva is a Doctoral Researcher at the Manchester Metropolitan University Business School. Her current research area is the economic transformation of the transition economies in central and eastern Europe and the Commonwealth of Independent States, with particular reference to central banking and monetary policies.
1 Research published under Virtual Surveys Ltd and under Pete Comley comes from one and the same organisation.
Bachman, D., Elfrink, J. & Vazzana, G. (1996) Tracking the progress of email versus snail-mail. Marketing Research, 8, pp. 31-35.
Comley, P. (2000) Pop-up surveys: what works, what doesn't work and what will work in the future. ESOMAR Net Effects Internet Conference, Dublin, April (http://www.virtualsurveys.com/papers/popup-paper.htm).
Craig, C.S. & Douglas, S.P. (2001) Conducting international marketing research in the twenty-first century. International Marketing Review, 18(1), pp. 80-90.
Dillman, D.A. (1978) Mail and Telephone Surveys: The Total Design Method. New York: Wiley-Interscience.
Dillman, D.A. (1991) The design and administration of mail surveys. Annual Review of Sociology, 17, pp. 225-249.
Dillman, D.A. & Tortora, R. (1998) Principles for constructing respondent-friendly web surveys and their influence on the response. American Statistical Association Meeting, Dallas.
Kehoe, C., Pitkow, J. & Rogers, J. (1998) GVU's ninth WWW user survey report. July (http://www.gvu.gatech.edu/user_surveys/survey-1998-04/).
Kent, R. & Lee, M. (1999) Using the Internet for market research: a study of private trading on the Internet. Journal of the Market Research Society, 41(4), pp.377-385.
Kiesler, S. & Sproull, L.S. (1986) Response effects in the electronic survey. Public Opinion Quarterly, 50(3), pp. 402-413.
King, W.C. & Miles, E.W. (1995) A quasi-experimental assessment of the effect of computerizing noncognitive paper-and-pencil measurements: a test of measurement equivalence. Journal of Applied Psychology, 80, pp. 643-651.
Laurel, B. (1995) The Art of Human Computer Interface Design. Reading, MA: Addison-Wesley.
Martin, C.L. (1994) The impact of topic interest on mail survey response rate behaviour. Journal of the Market Research Society, 36(4), pp. 327-339.
Mehta, R. & Sivadas, E. (1995) Comparing response rates and response content in mail versus electronic surveys. Journal of the Market Research Society, 37(4), pp. 429-440.
Meuter, M.L., Ostrom, A.L., Roundtree, R.I. & Bitner, M.J. (2000) Self-service technologies: understanding customer satisfaction with technology-based service encounters. Journal of Marketing, 64 (July), pp. 50-64.
Norman, D. (1986) Cognitive engineering. User-centred system design: new perspectives on human-computer interaction. In D. Norman & S.W. Draper (eds) User-Centred System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Norman, D. (1993) Cognition in the Head and in the World. Cognitive Science, 17(1), pp. 1-6.
Parker, L. (1992) Collecting data the email way. Training and Development, July, pp. 52-53.
Ranchhod, A. & Zhou, F. (2001) Comparing respondents of email and mail surveys: understanding the implications of technology. Marketing Intelligence and Planning, 19(4), pp. 254-262.
Ray, N., Griggs, K. & Tabor, S. (2001) Web Based Survey Research Workshop, WDSI, April (http://telecomm.boisestate.edu/research/).
Runham, M. (1999) Presentation at the IIR Internet Research Conference, December.
Saunders, M., Lewis, P. & Thornhill, A. (1997) Research Methods for Business Students. London: Pitman Publishing, p. 247.
Schaefer, R. & Dillman, D.A. (1998) Development of a standard email methodology: results of an experiment. Public Opinion Quarterly, 62(3), pp. 378-397.
Schuldt, B.A. & Totten, J. (1994) Electronic mail vs. mail survey response rates. Marketing Research, 6, pp. 36-39.
Schuldt, B.A. & Totten, J.W. (1999) Email surveys: what we've learned thus far. Quirk's Marketing Research Review, July (www.quirks.com).
Sheehan, K.B. & McMillan, S.J. (1999) Response variation in email surveys: an exploration. Journal of Advertising Research, 39(4), p. 45.
Stanton, J.M. (1998) An empirical assessment of data collection using the Internet. Personnel Psychology, 51(3), pp. 709-726.
Tse, A.C.B. (1998) Comparing the response rate, response speed and response quality of two methods of sending questionnaires: email vs mail. Journal of the Market Research Society, 40(4), pp. 353-362.
Tse, A.C.B., Tse, K.C., Yin, C.H., Ting, C.B. & Hong, W.C. (1995) Comparing two methods of sending out questionnaires: email vs mail. Journal of the Market Research Society, 37(4), pp. 441-446.
Virtual Surveys Ltd (2001a) email surveys in virtual surveys: web site research experts (http://www.virtualsurveys.com/services/email-web.htm).
Virtual Surveys Ltd (2001b) Virtual surveys: web site research experts (http://www.virtualsurveys.com/services/vsurveys.htm).
Weible, R. & Wallace, J. (1998) The impact of the internet on data collection. Marketing Research, 10(3), pp. 19-23.
Wygant, S. & Lindorf, R. (1999) Surveying collegiate net surfers - web methodology or mythology? Quirk's Marketing Research Review, July (www.quirks.com).
Wygant, S. & Feld, K. (2000) E-interviewers add human touch to web-based research. Quirk's Marketing Research Review (www.quirks.com).