Research is one of the most important tools with which humankind is able to learn about, understand, interpret, explain and, subsequently, transform reality. Its development across the different scientific branches is essential both in searching for solutions to the problems society faces and in acquiring new knowledge that explains and orients its transformation. Scientific methods and investigations also provide professionals – in their respective fields – with perspective, allowing for the critical analysis of information derived from findings and of the knowledge on which their actions are based (Álvarez, 2011).
This essay will focus on two different methodologies used in scientific investigation – qualitative and quantitative – and will cover the specific aspects that make each one unique, along with their individual strengths and weaknesses. Such aspects include: methods of data collection, types of data being collected, sample sizes, the efficiency with which data is collected, and whether said data can be generalised to a larger sample or population. Both qualitative and quantitative research have strengths and weaknesses relating to how they are structured and how their methods are used to obtain the findings and final results on the topic of the research.
This essay will examine these by comparing two papers – on whiplash associated disorders – each utilising one of said distinct methods of study: “If I can get over that, I can get over anything”—understanding how individuals with acute whiplash disorders form beliefs about pain and recovery: a qualitative study (Williamson, 2015) (qualitative) and Thoracic dysfunction in whiplash associated disorders: A systematic review (Heneghan, 2018) (quantitative).
Data collection methods: In quantitative research, the collection, handling and analysis of numerical data is the basis for proving or disproving a stated hypothesis. In order for said hypothesis to be challenged and the findings applied to a general population – making the investigation useful – large amounts of consistent, reliable and repeatable data have to be gathered. This was done very effectively by Heneghan in his paper, which analysed and processed data from over 50,000 participants. Such a feat has two important requirements: it needs to be done as quickly as possible – to allow maximal time for the processing of the data found – and as cost-effectively as possible – to reduce overall research costs and redirect any remaining budget to areas of the study that may require it more.
In addition, for results to be applicable to a general population, the participants involved need to come from many different places, depending on the scale to which the results are being compared – a city, a country or the globe. Heneghan also did this effectively, as his review drew on more than 30 studies from many different countries around the world. Rapid data collection on such a large scale has been made possible in recent years mainly by the development of mass communication – the telephone and the internet – and by its widespread adoption among younger generations, who prefer it over pencil and paper (Flanagan, 2015).
With the press of a button, a survey can be delivered to any number of required participants, potentially on a global scale. This is both cost- and time-effective, as it removes the need to go out and ask people to fill out a survey, or to use the postal system to distribute data-collection forms – both of which take time, effort and a team of individuals (who need to be paid to manage and carry out said tasks). Time saved by such a fast method of distribution can be used to study and process data more effectively and thoroughly, and money saved by using this (largely free) system can be put towards paying a larger team of qualified individuals and analysts to process the gathered information faster and more thoroughly – so that the research can eventually be published sooner. The downside is that such a team needs to be very big and must be maintained (Ruthberg, 2018) – that is, paid – for as long as the task takes (which can be a long time). This can become expensive for those leading or sponsoring the investigation, especially as hired analysts may be particularly expensive due to their specialisation (Leedy, 2001).
This point is made even more important by the fact that processing such a large amount of data takes a long time, which increases total costs anyway. In qualitative research, the data being obtained is non-numerical and aims instead to investigate a theory or provide the basis of a potential future hypothesis (Babbie, 2014). It focuses on human behaviour and the nature of how people think, rather than on the number of individuals who tick the same boxes on a survey. The approach to gathering data is therefore different in this research method. Because the data involves the manner in which individuals answer a question posed to them (and not just the answer they give), participants are often interviewed face-to-face instead of filling out a piece of paper, allowing reactions, emotions and other more subjective details to be accounted for when questions are posed.
This is a time-consuming process and therefore limits the total number of participants, which is evident in Williamson’s paper, where the sample size was just 20 individuals. It might be assumed that, because the sample size is much smaller, qualitative research must be more time- and cost-effective than quantitative research. However, this is more often not the case, due to several key factors. As the data being collected is not numerical but interpreted from an interview, if too many interviewers are involved, more interviews may be done in a short time period, but the data gained from them will be interpreted differently because of each interviewer’s individual subjectivity.
There is therefore a risk of data skew and completely unreliable results. If only one interviewer conducts the investigation, all interpretations of how each subject answers the posed questions will be consistent, but the trade-off is that the collection, processing and interpretation of the data will take much longer than in a quantitative investigation, and the interviewer collecting the data (who is probably specialised in this field) will have to be paid for doing more work over a much longer period. Further, many subjects may ask for payment for the use of their time (Runciman, 1993), which can increase costs further. Recorded interviews also have to be played back or read through – a time-consuming process that likewise involves handling a lot of collected information – but because this material is conversational rather than empirical, it cannot simply be put into a table, sorted and then graphed for easier interpretation. The writing-up and subsequent interpretation of these interviews takes time and in-depth analysis.
Data type and interpretation: The two methods also differ in what kind of information is extracted from the investigation and, furthermore, in how this information can be applied. This means that each research method has its benefits and flaws in comparison to the other. The numerical aspect of quantitative research is beneficial to its general purpose – to prove or disprove a hypothesis and apply the findings to a general population. Although it often takes skilled or specialised individuals to interpret large amounts of data, there is rarely any room for interpretive bias on the part of the individual consolidating their allocated data set. This is because quantitative investigation yields data whose interpretation is unconditional and which is displayed numerically. It can be plotted on a graph, and correlations and conclusions can quickly be derived once all the data has been processed effectively and – in an ideal situation – without any manipulation.
Once all of the data is in a comparable format, investigators can begin to reject or accept their hypothesis and apply it (ideally) to a larger population, making inferences and asking further questions about the findings to be studied in future investigations. With effectively presented data, researchers can also find trends, patterns and correlations within the data set, allowing even more conclusions and assumptions to be made about the general (or specific) demographic being studied. All of this means that this method of research is excellent for testing whether a proposed theory is right, but is not very good for understanding why.
As quantitative surveys generally use ‘yes’ or ‘no’ as answers to the proposed questions, data can be compared with other data; however, the specific details that would also answer the question are lacking, and so the conclusions drawn are largely generalised. This is where qualitative research has its advantages. As the data is gathered through interview or observation, not only can the different answers to proposed questions be recorded, but the manner in which they are answered – which varies subjectively from participant to participant – can also be analysed. This allows a more holistic and specific approach: not just what is being answered, but how individuals answer, and the nature in which humans interpret questions, can also be observed.
As interviewers can delve deeper into both an individual’s answer and their particular personal feelings on the subject surrounding that answer, there is potential for more information to be extracted to support the hypotheses on a broader level. However, because the information gathered is interpreted rather than absolute, there is a greater risk of data skew or bias arising from how the interviewer understood the answer given. In addition, if the study covers a demographic of many nationalities, there is also the risk of interpretive misunderstanding where the interview is not conducted in the subject’s native language (Lee, 2014), or alternatively – in the instance where a translator is being used – some meaning or extra detail may be lost in translation due to the use of a middle-man in conversation.
Bias in other forms: Bias does not come only in the form of misinterpretation. If a large corporation is funding an investigation, it may pay the researchers to produce results that favour the company (Lundh, 2017). This can take the form of what is known as ‘data manipulation’ and is less noticeable in quantitative research than in qualitative research, owing to the very large difference in sample sizes (Lundh, 2017). Data may not necessarily be directly manipulated; however, some data sets that do not favour the proposed hypotheses may be excluded from the final results, skewing correlations and changing conclusions. This is poor scientific practice.
In conclusion, each research method has its advantages and disadvantages. Quantitative research can observe trends and correlations in very big data sets – allowing generalisations to be made and hypotheses to be firmly proved or disproved – but it falls short in the application and consideration of human nature in answering a question. Qualitative research methods may not be able to provide such big data sets from which to draw broad conclusions about larger populations, but they can focus on how and why posed questions are answered the way they are, which allows for further investigation on a larger scale in the future – as new questions are asked based on findings and new hypotheses are proposed as a result. One might say that each research method is a necessary stepping stone into starting an investigation utilising the other. In an ideal situation, it could be argued that both methods should be used together instead of individually, providing both the applicable raw data and the finer details that numerical investigation would not normally pick up – further backing up what the numbers show.