Quantitative marketing research

Source: Wikipedia, the free encyclopedia.

Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.

As a social research method, it typically involves the construction of questionnaires and scales. People who respond (respondents) are asked to complete the survey. Marketers use the information so obtained to understand the needs of individuals in the marketplace, and to create strategies and marketing plans.

Data collection

The most popular quantitative marketing research method is a survey. Surveys typically contain a combination of structured questions and open questions. Survey participants respond to the same set of questions, which allows the researcher to easily compare responses by different types of respondent. Surveys can be distributed in one of four ways: telephone, mail, in-person and online (whether by mobile or desktop).

Another quantitative research method is to conduct experiments into how individuals respond to different situations or scenarios. One example of this is A/B testing of a piece of marketing communications, such as a website landing page. Website visitors are shown different versions of the landing page, and marketers track which is more effective.[1]
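The sketch below shows one common way such an A/B experiment might be evaluated, using a two-proportion z-test on hypothetical results; the visitor and conversion counts (and the variant labels) are invented purely for illustration.

```python
import math

# Hypothetical A/B test results for two landing-page variants
# (all numbers are invented for illustration).
visitors_a, conversions_a = 5000, 400   # variant A
visitors_b, conversions_b = 5000, 460   # variant B

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled conversion rate under the null hypothesis of "no difference".
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

# Two-proportion z-test: how many standard errors apart are the two rates?
z = (rate_b - rate_a) / std_err
# Two-sided p-value from the standard normal distribution.
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"Conversion rate A: {rate_a:.1%}, B: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value suggests that the observed difference in conversion rates is unlikely to be due to chance alone.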

Differences between consumer and B2B quantitative research

Quantitative research is used in both consumer research and business-to-business (B2B) research. However, there are differences in how consumer researchers and B2B researchers distribute their surveys.

Generally, surveys are distributed online more often than in person, by telephone or by mail.[2] However, in B2B research, online surveys are not always possible, often because it is difficult to reach certain business decision-makers by email. As a result, B2B researchers still often conduct surveys by telephone.[3]

Typical general procedure

Simply put, there are five major steps involved in the research process:

  1. Defining the problem.
  2. Research design.
  3. Data collection.
  4. Data analysis.
  5. Report writing and presentation.

A brief discussion of these steps follows:

  1. Problem audit and problem definition - What is the problem? What are the various aspects of the problem? What information is needed?
  2. Conceptualization and operationalization - How exactly do we define the concepts involved? How do we translate these concepts into observable and measurable behaviours?
  3. Hypothesis specification - What claim(s) do we want to test?
  4. Research design specification - What type of methodology to use? - examples: questionnaire, survey
  5. Question specification - What questions to ask? In what order?
  6. Scale specification - How will preferences be rated?
  7. Sampling design specification - What is the target population, and what sampling method should be used? - examples: Convenience Sampling, Judgement Sampling, Purposive Sampling, Quota Sampling, Snowball Sampling, etc.
  8. Data collection - Use mail, telephone, internet, mall intercepts
  9. Codification and re-specification - Make adjustments to the raw data so it is compatible with statistical techniques and with the objectives of the research - examples: assigning numbers, consistency checks, substitutions, deletions, weighting, dummy variables, scale transformations, scale standardization
  10. Statistical analysis - Perform various descriptive and inferential techniques (see below) on the raw data. Make inferences from the sample to the whole population. Test the results for statistical significance (steps 9 and 10 are illustrated by the code sketch after this list).
  11. Interpret and integrate findings - What do the results mean? What conclusions can be drawn? How do these findings relate to similar research?
  12. Write the research report - The report usually has headings such as: 1) executive summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts and diagrams. Present the report to the client in a 10-minute presentation. Be prepared for questions.
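As a minimal sketch of the codification and statistical analysis steps (9 and 10 above), the example below codes a categorical variable as dummy variables, standardizes a rating scale, and runs a simple significance test. The column names and responses are hypothetical, and it assumes pandas and SciPy are available.

```python
import pandas as pd
from scipy import stats

# Hypothetical coded survey responses (values invented for illustration).
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "region":        ["north", "south", "south", "north", "south", "north"],
    "satisfaction":  [4, 2, 5, 3, 4, 5],   # 1-5 rating scale
})

# Codification: convert the categorical region into 0/1 dummy variables.
df = pd.get_dummies(df, columns=["region"])

# Scale standardization: express satisfaction as z-scores.
df["satisfaction_z"] = (df["satisfaction"] - df["satisfaction"].mean()) / df["satisfaction"].std()

# Descriptive statistics for the report.
print(df.describe())

# Inferential step: does mean satisfaction differ between the two regions?
north = df.loc[df["region_north"] == 1, "satisfaction"]
south = df.loc[df["region_south"] == 1, "satisfaction"]
t_stat, p_value = stats.ttest_ind(north, south, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```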

The design step may involve a pilot study in order to discover any hidden issues. The codification and analysis steps are typically performed by computer, using statistical software. The data collection steps can in some instances be automated, but often require significant manpower. Interpretation is a skill mastered only by experience.

Statistical analysis

The data acquired for quantitative marketing research can be analysed by almost any of the range of techniques used in statistical surveys. In any instance, an appropriate type of statistical analysis should take account of the various types of error that may arise, as outlined below.

Reliability and validity

Research should be tested for reliability, generalizability, and validity.

Generalizability is the ability to make inferences from a sample to the population.

Reliability is the extent to which a measure will produce consistent results.
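
One widely used way to quantify reliability as internal consistency is Cronbach's alpha. The sketch below computes it for a hypothetical three-item attitude scale; the respondent scores are invented purely for illustration, and NumPy is assumed to be available.

```python
import numpy as np

# Hypothetical scores: 6 respondents x 3 items of the same attitude scale.
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
    [5, 5, 4],
])

k = items.shape[1]                               # number of items in the scale
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale

# Cronbach's alpha: values closer to 1 indicate the items give consistent results.
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```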

Validity asks whether the research measured what it intended to.

  • Content validation (also called face validity) checks how well the content of the research is related to the variables to be studied; it seeks to answer whether the research questions are representative of the variables being researched. It is a demonstration that the items of a test are drawn from the domain being measured.
  • Criterion validation checks how meaningful the research criteria are relative to other possible criteria. When the criterion is collected later, the goal is to establish predictive validity.
  • Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity (how well the research relates to other measures of the same construct), discriminant validity (how poorly the research relates to measures of opposing constructs), and nomological validity (how well the research relates to other variables as required by theory).
  • Internal validation, used primarily in experimental research designs, checks the relation between the dependent and independent variables (i.e. Did the experimental manipulation of the independent variable actually cause the observed results?)
  • External validation checks whether the experimental results can be generalized.

Validity implies reliability: a valid measure must be reliable. Reliability does not necessarily imply validity, however: a reliable measure is not necessarily valid.

Types of errors

Random sampling errors:

  • sample too small
  • sample not representative
  • inappropriate sampling method used
  • random errors

Research design errors:

  • bias introduced
  • measurement error
  • data analysis error
  • sampling frame error
  • population definition error
  • scaling error
  • question construction error

Interviewer errors:

  • recording errors
  • cheating errors
  • questioning errors
  • respondent selection error

Respondent errors:

  • non-response error
  • inability error
  • falsification error

Hypothesis errors:

  • type I error (also called alpha error)
    • the study results lead to the rejection of the null hypothesis even though it is actually true (illustrated by the simulation sketch after this list)
  • type II error (also called beta error)
    • the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
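
As a small illustration of the Type I error, the simulation sketch below repeatedly compares two samples that are in fact drawn from the same population, so the null hypothesis of "no difference" is always true; testing at a significance level of 0.05 nevertheless rejects it in roughly 5% of trials. The sample sizes, distribution, and simple large-sample z-test helper are assumptions made purely for illustration.

```python
import math
import random
import statistics

random.seed(42)

def two_sample_p_value(a, b):
    """Two-sided p-value of a large-sample z-test for equal means."""
    std_err = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / std_err
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05
trials = 2000
false_rejections = 0

for _ in range(trials):
    # Both groups come from the same distribution, so the null hypothesis is true.
    group_a = [random.gauss(50, 10) for _ in range(100)]
    group_b = [random.gauss(50, 10) for _ in range(100)]
    if two_sample_p_value(group_a, group_b) < alpha:
        false_rejections += 1   # Type I error: a true null hypothesis was rejected

print(f"Observed Type I error rate ≈ {false_rejections / trials:.3f} (expected ≈ {alpha})")
```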

References

  1. ^ Principles of Marketing Module 6: Marketing Information and Research | Primary Marketing Research Methods (Spring 2016)
  2. ^ 2018 Q3-Q4 GRIT Report, GreenBook, 2018, p. 34
  3. ^ Wells, Chris (July 12, 2020). "How to Conduct B2B Quantitative Research". Adience. Retrieved 23 July 2020.
