More than Words

Analysis of user-generated content to identify latent properties of service quality


Key Facts

Project duration:  10/2013 - 12/2014
Project management:  Prof. Dr. Michaela Geierhos & Prof. Nancy Wünderlich
Funding program:  Forschungspreis 2013 der Universität Paderborn
Funding amount:  EUR 62,000


Motivation

Internet users have ever more opportunities to submit reviews on a wide variety of products (e.g. Amazon reviews), services (e.g. MyHammer, jameda) and experiences (e.g. TripAdvisor). On review platforms, users actively share their experiences with services such as hotel stays, visits to medical facilities or even mail-order purchases with other interested customers. Many consumers regard these reviews as a helpful source of information when weighing a personal purchase decision. However, the growing flood of ratings and reviews on rating portals (e.g. ShopVote) and social media (e.g. qype, flickr) also confronts internet users with the challenge of filtering the vast number of review comments and portals for relevance.
These review comments often consist of free text (so-called user-generated content), which can differ significantly in structure and content focus. Especially when such free texts form the only basis for evaluation, users face an interpretation hurdle. Where quantifiable user ratings on scales are available, they are not always consistent with the freely formulated review comments. While various software solutions enable companies to automatically analyze the opinions of their customers (e.g. TrustYou) and thus track trends, internet users themselves have no tool at hand that helps them assess a company's service quality at first glance from millions of reviews.


Innovation

The project develops a new interdisciplinary correlative method ...

  • which uses computational linguistic methods for the semantic content analysis of review texts on the Web 2.0 in order to draw conclusions about domain-specific customer requirements for services and about user-specific deviations in polarity;

  • which relates empirically determined dimensions of service quality to qualitatively and quantitatively measurable customer satisfaction, instead of relying on domain-independent SERVQUAL categories;

  • which, for the first time, enables an automatic comparison of qualitative and quantitative service evaluations by taking into account each user's typical evaluation intervals on polarity scales.
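The idea behind the last point can be illustrated with a minimal Python sketch. It is not the project's actual system: the function names, the sentiment score in [-1, 1] (as produced by any off-the-shelf polarity analyzer), and the "mean ± one standard deviation" definition of a user's typical rating interval are all illustrative assumptions. The sketch checks whether the polarity of a review text agrees with the numeric scale rating, relative to that user's own rating history rather than a fixed global threshold.

```python
from statistics import mean, pstdev

def user_interval(ratings):
    """Typical rating interval of one user: mean +/- one standard deviation."""
    m = mean(ratings)
    s = pstdev(ratings)
    return (m - s, m + s)

def consistent(text_polarity, scale_rating, ratings_history, scale_max=5):
    """Check whether a text polarity score in [-1, 1] agrees with a numeric
    rating, taking the user's own rating habits into account."""
    lo, hi = user_interval(ratings_history)
    # Map polarity [-1, 1] linearly onto the rating scale [1, scale_max].
    expected = 1 + (text_polarity + 1) / 2 * (scale_max - 1)
    # Tolerate deviations up to the half-width of the user's typical interval
    # (with a small floor so near-constant raters are not over-flagged).
    half_width = (hi - lo) / 2
    return abs(expected - scale_rating) <= max(half_width, 0.5)

# Example: a user who habitually rates 4-5.
history = [5, 4, 5, 4, 5]
print(consistent(0.4, 4, history))   # mildly positive text, rating 4 -> True
print(consistent(-0.8, 5, history))  # clearly negative text, rating 5 -> False
```

A per-user interval matters because one reviewer's "4 stars" may express enthusiasm while another's expresses routine satisfaction; calibrating against each user's history is what distinguishes a genuine text/scale mismatch from an individual rating habit.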


Implementation

The aim of the research project is to realize the scenario outlined above using methods from computational linguistics and service management. Based on the research gaps in both disciplines, the following research questions are addressed:

  • To what extent do writers of review comments use features corresponding to the classical evaluation dimensions of service quality to describe service experiences? To what extent are other evaluation criteria and dimensions used?
  • To what extent do users' review comments vary? Which evaluation behavior is recognizable? To what extent do differences exist between service areas?
  • Where quantitative scales are used: does the addition of free-text reviews help to establish scale equivalence? To what extent do the qualitative review comments agree with the quantitative overall ratings – and under what conditions do they differ?