The participants were 51 experienced internet users recruited by Sun (average Web experience was 24 months). Participants ranged in age from 22 to 69 (average age was 41). To focus on "normal users," we excluded the following professions from the study: webmasters, web-site designers, graphic artists, user interface professionals, writers, editors, computer scientists, and computer programmers.
We checked for effects of age and Web experience on the dependent variables mentioned in the first five hypotheses, but we found only negligible differences, none of them significant. Had the websites in our study been more challenging to navigate, or had our tasks required use of search engines or other Web infrastructure, we might have expected significant effects of both age and Web experience.
The experiment employed a 5-condition (promotional control, scannable, concise, objective, or combined) between-subjects design. Conditions were balanced for gender and employment status.
Called "Travel Nebraska," the site contained information on Nebraska. We used a travel site because 1) in our earlier qualitative studies, many internet users said travel was one of their interests, and 2) travel content lent itself to the different writing styles we wanted to study. We chose Nebraska to minimize the effect of prior knowledge on our measures (in recruiting participants, we screened out people who had ever lived in, or even near, Nebraska).
Each version of the Travel Nebraska site consisted of seven pages, and all versions used the same hypertext structure. So that participants would concentrate on the text and not be distracted, we used modest hypertext (with no links away from the site) and included only three photos plus one illustration. There was no animation. Topics in the site were Nebraska's history, geography, population, tourist attractions, and economy. The Appendix to this paper shows parts of a sample page from each condition.
The control version of the site had a promotional style of writing (i.e., "marketese"), which contained exaggeration, subjective claims, and boasting rather than just simple facts. This style is characteristic of many pages on the Web today.
The concise version had a promotional writing style, but its text was much shorter. Certain less-important information was cut, bringing the word count for each page to about half that of the corresponding page in the control version. Some of the writing in this version was in the inverted pyramid style. However, all information users needed to perform the required tasks was presented in the same order in all versions of the site.
The scannable version contained marketese, but it was written to encourage scanning, or skimming, of the text for information of interest. This version used bulleted lists, boldface text to highlight keywords, photo captions, shorter sections of text, and more headings.
The objective version was stripped of marketese. It presented information without exaggeration, subjective claims, or boasting.
The combined version had a shorter word count, was marked up for scannability, and was stripped of marketese.
Upon arrival at the usability lab, the participant signed a videotape consent form and was told he or she would visit a website, perform tasks, and answer several questions.
After making sure the participant knew how to use the browser, the experimenter explained that he would observe from the room next door to the lab through the one-way mirror. Throughout the study, the participant received both printed instructions from a paper packet and verbal instructions from the experimenter.
The participant began at the site's homepage. The first two tasks were to find specific facts (located on separate pages in the site) without using a search tool or the "Find" command. The participant then answered Part 1 of a brief questionnaire. Next was a judgment task (suggested by Spool et al. 1997) in which the participant first had to find relevant information, then make a judgment about it. This task was followed by Part 2 of the questionnaire.
Next, the participant was instructed to spend ten minutes learning as much as possible from the pages in the website, in preparation for a short exam. Finally, the participant was asked to draw on paper the structure of the website, to the best of his or her recollection.
After completing the study, each participant was told the details of the study and received a gift.
Task time was the number of seconds it took users to find answers for the two search tasks and the one judgment task.
The two search tasks were to answer: "On what date did Nebraska become a state?" and "Which Nebraska city is the 7th largest, in terms of population?" The questions for the judgment task were: "In your opinion, which tourist attraction would be the best one to visit? Why do you think so?"
Task errors was a percentage score based on the number of incorrect answers users gave to the two search tasks.
Memory comprised two measures from the exam: recall and recognition. Recognition memory was a percentage score based on the number of correct answers minus the number of incorrect answers to 5 multiple-choice questions. For example, one of the questions read: "Which is Nebraska's largest ethnic group? a) English b) Swedes c) Germans d) Irish."
Recall memory was a percentage score based on the number of tourist attractions correctly recalled minus the number incorrectly recalled. The question was: "Do you remember any names of tourist attractions mentioned in the website? Please use the space below to list all the ones you remember."
Time to recall site structure was the number of seconds it took users to draw a sitemap.
A related measure, sitemap accuracy, was a percentage score based on the number of pages (maximum 7) and connections between pages (maximum 9) correctly identified, minus the number of pages and connections incorrectly identified.
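The correct-minus-incorrect percentage scoring used for recognition memory, recall memory, and sitemap accuracy can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the function name is invented, and clamping negative totals at zero is an assumption, since the paper does not say how a net-negative score was handled.

```python
def penalty_score(correct: int, incorrect: int, max_items: int) -> float:
    """Percentage score: correct answers minus incorrect answers,
    divided by the maximum possible, expressed as a percentage.

    Clamping at zero is an assumption; the paper does not specify
    how net-negative totals were treated.
    """
    return max(correct - incorrect, 0) / max_items * 100

# Recognition memory: 4 of the 5 multiple-choice questions right, 1 wrong
recognition = penalty_score(4, 1, 5)      # (4 - 1) / 5 = 60.0

# Sitemap accuracy: 14 of the 16 possible elements (7 pages + 9
# connections) correctly identified, 2 elements incorrectly added
sitemap = penalty_score(14, 2, 16)        # (14 - 2) / 16 = 75.0
```

The subtraction penalizes guessing: a participant who adds wrong answers lowers his or her score rather than merely failing to raise it.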
Subjective satisfaction was determined from participants' answers to a paper-and-pencil questionnaire. Some questions asked about specific aspects of working with the website, and other questions asked for an evaluation of how well certain adjectives described the site (anchored by "Describes the site very poorly" and "Describes the site very well"). All questions used 10-point Likert scales.