What are the two essential components of every social psychological experiment?

Social psychologists are interested in the ways that other people affect thought, emotion, and behavior. To explore these concepts requires special research methods. Following a brief overview of traditional research designs, this module introduces how complex experimental designs, field experiments, naturalistic observation, experience sampling techniques, survey research, subtle and nonconscious techniques such as priming, and archival research and the use of big data may each be adapted to address social psychological questions. This module also discusses the importance of obtaining a representative sample along with some ethical considerations that social psychologists face.



Learning Objectives

  • Describe the key features of basic and complex experimental designs.
  • Describe the key features of field experiments, naturalistic observation, and experience sampling techniques.
  • Describe survey research and explain the importance of obtaining a representative sample.
  • Describe the implicit association test and the use of priming.
  • Describe use of archival research techniques.
  • Explain five principles of ethical research that most concern social psychologists.
Introduction
Interested in improving your personal performance? Test your skills in the presence of other people to take advantage of social facilitation. [Image: Hans 905, //goo.gl/SiOSZh, CC BY-NC-SA 2.0, //goo.gl/iF4hmM]

Are you passionate about cycling? Norman Triplett certainly was. At the turn of the 20th century he studied the lap times of cycling races and noticed a striking fact: riding in competitive races appeared to improve riders’ times by about 20-30 seconds per mile compared to when they rode the same courses alone. Triplett suspected that the riders’ enhanced performance could not be explained simply by the slipstream caused by other cyclists blocking the wind. To test his hunch, he designed what is widely described as the first experimental study in social psychology [published in 1898!]—in this case, having children reel in a length of fishing line as fast as they could. The children were tested alone, then again when paired with another child. The results? The children who performed the task in the presence of others out-reeled those who did so alone.

Although Triplett’s research fell short of contemporary standards of scientific rigor [e.g., he eyeballed the data instead of measuring performance precisely; Stroebe, 2012], we now know that this effect, referred to as “social facilitation,” is reliable—performance on simple or well-rehearsed tasks tends to be enhanced when we are in the presence of others [even when we are not competing against them]. To put it another way, the next time you think about showing off your pool-playing skills on a date, the odds are you’ll play better than when you practice by yourself. [If you haven’t practiced, maybe you should watch a movie instead!]

Research Methods in Social Psychology

One of the things Triplett’s early experiment illustrated is scientists’ reliance on systematic observation over opinion, or anecdotal evidence. The scientific method usually begins with observing the world around us [e.g., results of cycling competitions] and thinking of an interesting question [e.g., Why do cyclists perform better in groups?]. The next step involves generating a specific testable prediction, or hypothesis [e.g., performance on simple tasks is enhanced in the presence of others]. Next, scientists must operationalize the variables they are studying. This means they must figure out a way to define and measure abstract concepts. For example, the phrase “perform better” could mean different things in different situations; in Triplett’s experiment it referred to the amount of time [measured with a stopwatch] it took to wind a fishing reel. Similarly, “in the presence of others” in this case was operationalized as another child winding a fishing reel at the same time in the same room. Creating specific operational definitions like this allows scientists to precisely manipulate the independent variable, or “cause” [the presence of others], and to measure the dependent variable, or “effect” [performance]—in other words, to collect data. Clearly described operational definitions also help reveal possible limitations to studies [e.g., Triplett’s study did not investigate the impact of another child in the room who was not also winding a fishing reel] and help later researchers replicate them precisely. 

Laboratory Research

The Asch conformity experiment, which investigated how social pressure influences individual conformity, remains a classic example of a social psychology lab experiment. [Image: D-janous, //goo.gl/KwuGGM, CC BY-SA 4.0, //goo.gl/etijyD]

As you can see, social psychologists have always relied on carefully designed laboratory environments to run experiments where they can closely control situations and manipulate variables [see the NOBA module on Research Designs for an overview of traditional methods]. However, in the decades since Triplett discovered social facilitation, a wide range of methods and techniques have been devised, uniquely suited to demystifying the mechanics of how we relate to and influence one another. This module provides an introduction to the use of complex laboratory experiments, field experiments, naturalistic observation, survey research, nonconscious techniques, and archival research, as well as more recent methods that harness the power of technology and large data sets, to study the broad range of topics that fall within the domain of social psychology. At the end of this module we will also consider some of the key ethical principles that govern research in this diverse field.

The use of complex experimental designs, with multiple independent and/or dependent variables, has grown increasingly popular because they permit researchers to study both the individual and joint effects of several factors on a range of related situations. Moreover, thanks to technological advancements and the growth of social neuroscience, an increasing number of researchers now integrate biological markers [e.g., hormones] or use neuroimaging techniques [e.g., fMRI] in their research designs to better understand the biological mechanisms that underlie social processes.

We can dissect the fascinating research of Dov Cohen and his colleagues [1996] on “culture of honor” to provide insights into complex lab studies. A culture of honor is one that emphasizes personal or family reputation. In a series of lab studies, the Cohen research team invited dozens of university students into the lab to see how they responded to aggression. Half were from the Southern United States [a culture of honor] and half were from the Northern United States [not a culture of honor]; this type of setup constitutes a participant variable with two levels. Region of origin was independent variable #1. Participants also provided a saliva sample immediately upon arriving at the lab [they were given a cover story about how their blood sugar levels would be monitored over a series of tasks].

The participants completed a brief questionnaire and were then sent down a narrow corridor to drop it off on a table. En route, they encountered a confederate at an open file cabinet who pushed the drawer in to let them pass. When the participant returned a few seconds later, the confederate, who had re-opened the file drawer, slammed it shut and bumped into the participant with his shoulder, muttering “asshole” before walking away. In a manipulation of an independent variable—in this case, the insult—some of the participants were insulted publicly [in view of two other confederates pretending to be doing homework] while others were insulted privately [no one else was around]. In a third condition—the control group—participants experienced a modified procedure in which they were not insulted at all.

Although this is a fairly elaborate procedure on its face, what is particularly impressive is the number of dependent variables the researchers were able to measure. First, in the public insult condition, the two additional confederates [who observed the interaction, pretending to do homework] rated the participants’ emotional reaction [e.g., anger, amusement, etc.] to being bumped into and insulted. Second, upon returning to the lab, participants in all three conditions were told they would later undergo electric shocks as part of a stress test, and were asked how much of a shock they would be willing to receive [between 10 volts and 250 volts]. This decision was made in front of two confederates who had already chosen shock levels of 75 and 25 volts, presumably providing an opportunity for participants to publicly demonstrate their toughness. Third, across all conditions, the participants rated the likelihood of a variety of ambiguously provocative scenarios [e.g., one driver cutting another driver off] escalating into a fight or verbal argument. And fourth, in one of the studies, participants provided saliva samples, one right after returning to the lab, and a final one after completing the questionnaire with the ambiguous scenarios. Later, all three saliva samples were tested for levels of cortisol [a hormone associated with stress] and testosterone [a hormone associated with aggression].

The results showed that people from the Northern United States were far more likely to laugh off the incident [only 35% having anger ratings as high as or higher than amusement ratings], whereas the opposite was true for people from the South [85% of whom had anger ratings as high as or higher than amusement ratings]. Also, only those from the South experienced significant increases in cortisol and testosterone following the insult [with no difference between the public and private insult conditions]. Finally, no regional differences emerged in the interpretation of the ambiguous scenarios; however, the participants from the South were more likely to choose to receive a greater shock in the presence of the two confederates.

Figure 1

Field Research

Because social psychology is primarily focused on the social context—groups, families, cultures—researchers commonly leave the laboratory to collect data on life as it is actually lived. To do so, they use a variation of the laboratory experiment, called a field experiment. A field experiment is similar to a lab experiment except it uses real-world situations, such as people shopping at a grocery store. One of the major differences between field experiments and laboratory experiments is that the people in field experiments do not know they are participating in research, so—in theory—they will act more naturally. In a classic example from 1972, Alice Isen and Paula Levin wanted to explore the ways emotions affect helping behavior. To investigate this, they observed the behavior of people at pay phones [I know! Pay phones!]. Half of the unsuspecting participants [determined by random assignment] found a dime planted by researchers [I know! A dime!] in the coin slot, while the other half did not. Presumably, finding a dime felt surprising and lucky and gave people a small jolt of happiness. Immediately after the unsuspecting participant left the phone booth, a confederate walked by and dropped a stack of papers. Almost 100% of those who found a dime helped to pick up the papers. And what about those who didn’t find a dime? Only 1 out of 25 of them bothered to help.

In cases where it’s not practical or ethical to randomly assign participants to different experimental conditions, we can use naturalistic observation—unobtrusively watching people as they go about their lives. Consider, for example, a classic demonstration of the “basking in reflected glory” phenomenon: Robert Cialdini and his colleagues used naturalistic observation at seven universities to confirm that students are significantly more likely to wear clothing bearing the school name or logo on days following wins [vs. draws or losses] by the school’s varsity football team [Cialdini et al., 1976]. In another study, by Jenny Radesky and her colleagues [2014], 40 out of 55 observations of caregivers eating at fast food restaurants with children involved a caregiver using a mobile device. The researchers also noted that caregivers who were most absorbed in their devices tended to ignore the children’s behavior at first, and then respond with scolding, repeated instructions, or physical responses, such as kicking the children’s feet or pushing away their hands.

The ubiquitous smart phone provides social psychology researchers with an invaluable tool for working with study participants to gather data about such things as their daily activities, interactions, attitudes, and emotions. [Image: eltpics, //goo.gl/DWvoUK, CC BY-NC 2.0, //goo.gl/l8UUGY]

A group of techniques collectively referred to as experience sampling methods represent yet another way of conducting naturalistic observation, often by harnessing the power of technology. In some cases, participants are notified several times during the day by a pager, wristwatch, or a smartphone app to record data [e.g., by responding to a brief survey or scale on their smartphone, or in a diary]. For example, in a study by Reed Larson and his colleagues [1994], mothers and fathers carried pagers for one week and reported their emotional states when beeped at random times during their daily activities at work or at home. The results showed that mothers reported experiencing more positive emotional states when away from home [including at work], whereas fathers showed the reverse pattern. A more recently developed technique, known as the electronically activated recorder, or EAR, does not even require participants to stop what they are doing to record their thoughts or feelings; instead, a small portable audio recorder or smartphone app is used to automatically record brief snippets of participants’ conversations throughout the day for later coding and analysis. For a more in-depth description of the EAR technique and other experience-sampling methods, see the NOBA module on Conducting Psychology Research in the Real World.

Survey Research

In this diverse world, survey research is an invaluable tool for social psychologists to study individual and group differences in people’s feelings, attitudes, or behaviors. For example, the World Values Survey II was based on large representative samples from 19 countries and allowed researchers to determine that the relationship between income and subjective well-being was stronger in poorer countries [Diener & Oishi, 2000]. In other words, an increase in income has a much larger impact on your life satisfaction if you live in Nigeria than if you live in Canada. In another example, a nationally representative survey in Germany with 16,000 respondents revealed that holding cynical beliefs is related to lower income [e.g., between 2003 and 2012 the income of the least cynical individuals increased by $300 per month, whereas the income of the most cynical individuals did not increase at all]. Furthermore, survey data collected from 41 countries revealed that this negative correlation between cynicism and income is especially strong in countries where people in general engage in more altruistic behavior and tend not to be very cynical [Stavrova & Ehlebracht, 2016].

Of course, obtaining large, cross-cultural, and representative samples has become far easier since the advent of the internet and the proliferation of web-based survey platforms—such as Qualtrics—and participant recruitment platforms—such as Amazon’s Mechanical Turk. And although some researchers harbor doubts about the representativeness of online samples, studies have shown that internet samples are in many ways more diverse and representative than samples recruited from human subject pools [e.g., with respect to gender; Gosling et al., 2004]. Online samples also compare favorably with traditional samples on attentiveness while completing the survey, reliability of data, and proportion of non-respondents [Paolacci et al., 2010].

Subtle/Nonconscious Research Methods

The methods we have considered thus far—field experiments, naturalistic observation, and surveys—work well when the thoughts, feelings, or behaviors being investigated are conscious and directly or indirectly observable. However, social psychologists often wish to measure or manipulate elements that are involuntary or nonconscious, such as when studying prejudicial attitudes people may be unaware of or embarrassed by. A good example of a technique that was developed to measure people’s nonconscious [and often ugly] attitudes is known as the implicit association test [IAT] [Greenwald et al., 1998]. This computer-based task requires participants to sort a series of stimuli [as rapidly and accurately as possible] into simple and combined categories while their reaction time is measured [in milliseconds]. For example, an IAT might begin with participants sorting the names of relatives [such as “Niece” or “Grandfather”] into the categories “Male” and “Female,” followed by a round of sorting the names of disciplines [such as “Chemistry” or “English”] into the categories “Arts” and “Science.” A third round might combine the earlier two by requiring participants to sort stimuli into either “Male or Science” or “Female or Arts” before the fourth round switches the combinations to “Female or Science” and “Male or Arts.” If across all of the trials a person is quicker at accurately sorting incoming stimuli into the compound category “Male or Science” than into “Female or Science,” the authors of the IAT suggest that the participant likely has a stronger association between males and science than between females and science. Incredibly, this specific gender-science IAT has been completed by more than half a million participants across 34 countries, about 70% of whom show an implicit stereotype associating science with males more than with females [Nosek et al., 2009].
What’s more, when the data are grouped by country, national differences in implicit stereotypes predict national differences in the achievement gap between boys and girls in science and math. Our automatic associations, apparently, carry serious societal consequences.
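The core logic of the IAT is a comparison of average reaction times across the two compound-category pairings. The sketch below is a deliberately simplified illustration with made-up reaction times; the actual IAT scoring procedure uses trial-level error penalties and a standardized "D score" rather than a raw difference of means:

```python
# Illustrative sketch of the reaction-time comparison behind the IAT.
# All numbers are hypothetical; real IAT scoring uses the D-score algorithm.

def mean_rt(reaction_times_ms):
    """Average reaction time [ms] for one block of correctly sorted trials."""
    return sum(reaction_times_ms) / len(reaction_times_ms)

def implicit_association(compatible_block, incompatible_block):
    """Positive values indicate faster sorting in the 'compatible' pairing
    [e.g., "Male or Science"], suggesting a stronger implicit association."""
    return mean_rt(incompatible_block) - mean_rt(compatible_block)

# Hypothetical reaction times [ms] for one participant
male_science = [620, 580, 610, 595]    # compound category "Male or Science"
female_science = [760, 710, 745, 725]  # compound category "Female or Science"

diff = implicit_association(male_science, female_science)
print(f"Mean RT difference: {diff:.1f} ms")
```

On this invented data, the participant sorts the "Male or Science" pairing faster, which the IAT’s authors would interpret as a stronger male-science association.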

Another nonconscious technique, known as priming, is often used to subtly manipulate behavior by activating or making more accessible certain concepts or beliefs. Consider the fascinating example of terror management theory [TMT], whose authors believe that human beings are [unconsciously] terrified of their mortality [i.e., the fact that, some day, we will all die; Pyszczynski et al., 2003]. According to TMT, in order to cope with this unpleasant reality [and the possibility that our lives are ultimately essentially meaningless], we cling firmly to systems of cultural and religious beliefs that give our lives meaning and purpose. If this hypothesis is correct, one straightforward prediction would be that people should cling even more firmly to their cultural beliefs when they are subtly reminded of their own mortality. 

The research conducted by Rosenblatt and colleagues revealed that even seemingly sophisticated and level-headed thinkers like judges can be influenced by priming. [Image: Penn State, //goo.gl/mLrmWv, CC BY-NC-SA 2.0, //goo.gl/Toc0ZF]

In one of the earliest tests of this hypothesis, actual municipal court judges in Arizona were asked to set a bond for an alleged prostitute immediately after completing a brief questionnaire. For half of the judges the questionnaire ended with questions about their thoughts and feelings regarding the prospect of their own death. Incredibly, judges in the experimental group who were primed with thoughts about their mortality set a significantly higher bond than those in the control group [$455 vs. $50!]—presumably because they were especially motivated to defend their belief system in the face of a violation of the law [Rosenblatt et al., 1989]. Although the judges consciously completed the questionnaire, what makes this a study of priming is that the second task [setting the bond] was unrelated, so any influence of the questionnaire on their later judgments would have been nonconscious. Similar results have been found in TMT studies in which participants were primed to think about death even more subtly, such as by having them complete questionnaires just before or after they passed a funeral home [Pyszczynski et al., 1996].

To verify that the subtle manipulation [e.g., questions about one’s death] has the intended effect [activating death-related thoughts], priming studies like these often include a manipulation check following the introduction of a prime. For example, right after being primed, participants in a TMT study might be given a word fragment task in which they have to complete words such as COFF_ _ or SK _ _ L. As you might imagine, participants in the mortality-primed experimental group typically complete these fragments as COFFIN and SKULL, whereas participants in the control group complete them as COFFEE and SKILL.

The use of priming to unwittingly influence behavior, known as social or behavioral priming [Ferguson & Mann, 2014], has been at the center of the recent “replication crisis” in psychology [see the NOBA module on replication]. Whereas earlier studies showed, for example, that priming people to think about old age makes them walk slower [Bargh, Chen, & Burrows, 1996], that priming them to think about a university professor boosts performance on a trivia game [Dijksterhuis & van Knippenberg, 1998], and that reminding them of mating motives [e.g., sex] makes them more willing to engage in risky behavior [Greitemeyer, Kastenmüller, & Fischer, 2013], several recent efforts to replicate these findings have failed [e.g., Harris et al., 2013; Shanks et al., 2013]. Such failures to replicate highlight the need to ensure that both original studies and replications are carefully designed and have adequate sample sizes, and that researchers pre-register their hypotheses and openly share their results—whether these support the initial hypothesis or not.

Archival Research

Researchers need not rely only on developing new data to gain insights into human behavior. Existing documentation from decades and even centuries past provides a wealth of information that is useful to social psychologists. [Image: Archivo FSP, //goo.gl/bUx6sJ, CC BY-SA 3.0, //goo.gl/g6ncfj]

Imagine that a researcher wants to investigate how the presence of passengers in a car affects drivers’ performance. She could ask research participants to respond to questions about their own driving habits. Alternatively, she might be able to access police records of the number of speeding tickets issued by automatic camera devices, then count the number of solo drivers versus those with passengers. This would be an example of archival research. The examination of archives, statistics, and other records such as speeches, letters, or even tweets, provides yet another window into social psychology. Although this method is typically used as a type of correlational research design—due to the lack of control over the relevant variables—archival research shares the higher ecological validity of naturalistic observation. That is, the observations are conducted outside the laboratory and represent real world behaviors. Moreover, because the archives being examined can be collected at any time and from many sources, this technique is especially flexible and often involves less expenditure of time and other resources during data collection.

Social psychologists have used archival research to test a wide variety of hypotheses using real-world data. For example, analyses of major league baseball games played during the 1986, 1987, and 1988 seasons showed that baseball pitchers were more likely to hit batters with a pitch on hot days [Reifman et al., 1991]. Another study compared records of race-based lynching in the United States between 1882 and 1930 to the inflation-adjusted price of cotton during that time [a key indicator of the Deep South’s economic health], demonstrating a significant negative correlation between these variables. Simply put, there were significantly more lynchings when the price of cotton dropped, and fewer lynchings when the price of cotton rose [Beck & Tolnay, 1990; Hovland & Sears, 1940]. This suggests that race-based violence is associated with the health of the economy.
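The statistical backbone of archival findings like these is often a simple correlation between two time series. The sketch below computes a Pearson correlation coefficient on purely hypothetical yearly figures [invented for illustration; they are not the actual historical data]:

```python
# Minimal sketch of correlating two archival time series.
# The numbers below are entirely hypothetical, for illustration only.

import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical inflation-adjusted cotton prices and lynching counts per year
cotton_price = [10.2, 9.1, 8.5, 11.0, 12.3, 7.8]
lynchings = [14, 18, 21, 11, 9, 24]

r = pearson_r(cotton_price, lynchings)
print(f"r = {r:.2f}")  # strongly negative: higher prices, fewer lynchings
```

A value of r near -1 indicates that the two series move in opposite directions, which is what “significant negative correlation” means in the archival studies described above.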

More recently, analyses of social media posts have provided social psychologists with extremely large sets of data [“big data”] to test creative hypotheses. In an example of research on attitudes about vaccinations, Mitra and her colleagues [2016] collected over 3 million tweets sent by more than 32,000 users over four years. Interestingly, they found that those who held [and tweeted] anti-vaccination attitudes were also more likely to tweet about their mistrust of government and beliefs in government conspiracies. Similarly, Eichstaedt and his colleagues [2015] used the language of 826 million tweets to predict community-level mortality rates from heart disease. That’s right: more anger-related words and fewer positive-emotion words in tweets predicted higher rates of heart disease.

In a more controversial example, researchers at Facebook attempted to test whether emotional contagion—the transfer of emotional states from one person to another—would occur if Facebook manipulated the content that showed up in its users’ News Feed [Kramer et al., 2014]. And it did. When friends’ posts with positive expressions were concealed, users wrote slightly fewer positive posts [e.g., “Loving my new phone!”]. Conversely, when posts with negative expressions were hidden, users wrote slightly fewer negative posts [e.g., “Got to go to work. Ugh.”]. This suggests that people’s positivity or negativity can impact their social circles.

The controversial part of this study—which included 689,003 Facebook users and involved the analysis of over 3 million posts made over just one week—was the fact that Facebook did not explicitly request permission from users to participate. Instead, Facebook relied on the fine print in their data-use policy. And, although academic researchers who collaborated with Facebook on this study applied for ethical approval from their institutional review board [IRB], they apparently only did so after data collection was complete, raising further questions about the ethicality of the study and highlighting concerns about the ability of large, profit-driven corporations to subtly manipulate people’s social lives and choices.

Research Issues in Social Psychology

The Question of Representativeness

How confident can we be that the results of social psychology studies generalize to the wider population if study participants are largely of the WEIRD variety? [Image: Mike Miley, //goo.gl/NtvlU8, CC BY-SA 2.0, //goo.gl/eH69he]

Like their counterparts in the other areas of psychology, social psychologists have been guilty of largely recruiting samples of convenience from the thin slice of humanity—students—found at universities and colleges [Sears, 1986]. This presents a problem when trying to assess the social mechanics of the public at large. Aside from being an overrepresentation of young, middle-class Caucasians, college students may also be more compliant and more susceptible to attitude change, have less stable personality traits and interpersonal relationships, and possess stronger cognitive skills than samples reflecting a wider range of age and experience [Peterson & Merunka, 2014; Visser, Krosnick, & Lavrakas, 2000]. Put simply, these traditional samples [college students] may not be sufficiently representative of the broader population. Furthermore, considering that 96% of participants in psychology studies come from western, educated, industrialized, rich, and democratic countries [so-called WEIRD cultures; Henrich, Heine, & Norenzayan, 2010], and that the majority of these are also psychology students, the question of non-representativeness becomes even more serious.

Of course, when studying a basic cognitive process [like working memory capacity] or an aspect of social behavior that appears to be fairly universal [e.g., even cockroaches exhibit social facilitation!], a non-representative sample may not be a big deal. However, over time research has repeatedly demonstrated the important role that individual differences [e.g., personality traits, cognitive abilities, etc.] and culture [e.g., individualism vs. collectivism] play in shaping social behavior. For instance, even if we only consider a tiny sample of research on aggression, we know that narcissists are more likely to respond to criticism with aggression [Bushman & Baumeister, 1998]; conservatives, who have a low tolerance for uncertainty, are more likely to prefer aggressive actions against those considered to be “outsiders” [de Zavala et al., 2010]; countries where men hold the bulk of power in society have higher rates of physical aggression directed against female partners [Archer, 2006]; and males from the southern part of the United States are more likely to react with aggression following an insult [Cohen et al., 1996].

Ethics in Social Psychological Research

The Stanford Prison Study has been criticized for putting participants in dangerous and psychologically damaging situations. [Image: Teodorvasic97, //goo.gl/0LJReB, CC BY-SA 4.0, //goo.gl/etijyD]

For better or worse [but probably for worse], when we think about the most unethical studies in psychology, we think about social psychology. Imagine, for example, encouraging people to deliver what they believe to be a dangerous electric shock to a stranger [with bloodcurdling screams for added effect!]. This is considered a “classic” study in social psychology. Or, how about having students play the role of prison guards, deliberately and sadistically abusing other students in the role of prison inmates? Yep, social psychology too. Of course, both Stanley Milgram’s [1963] experiments on obedience to authority and the Stanford prison study [Haney et al., 1973] would be considered unethical by today’s standards, which have progressed along with our understanding of the field. Today, we follow a series of guidelines and receive prior approval from our institutional review boards before beginning such experiments. Among the most important principles are the following:

  1. Informed consent: In general, people should know when they are involved in research, and understand what will happen to them during the study [at least in general terms that do not give away the hypothesis]. They are then given the choice to participate, along with the freedom to withdraw from the study at any time. This is precisely why the Facebook emotional contagion study discussed earlier is considered ethically questionable. Still, it’s important to note that certain kinds of methods—such as naturalistic observation in public spaces, or archival research based on public records—do not require obtaining informed consent.
  2. Privacy: Although it is permissible to observe people’s actions in public—even without them knowing—researchers cannot violate their privacy by observing them in restrooms or other private spaces without their knowledge and consent. Researchers also may not identify individual participants in their research reports [we typically report only group means and other statistics]. With online data collection becoming increasingly popular, researchers also have to be mindful that they follow local data privacy laws, collect only the data that they really need [e.g., avoiding including unnecessary questions in surveys], strictly restrict access to the raw data, and have a plan in place to securely destroy the data after it is no longer needed.
  3. Risks and Benefits: People who participate in psychological studies should be exposed to risk only if they fully understand the risks and only if the likely benefits clearly outweigh those risks. The Stanford prison study is a notorious example of a failure to meet this obligation. It was planned to run for two weeks but had to be shut down after only six days because of the abuse suffered by the “prison inmates.” But even in less extreme cases, such as researchers wishing to investigate implicit prejudice using the IAT, researchers must consider the consequences of providing feedback to participants about their nonconscious biases. Similarly, any manipulations that could provoke serious emotional reactions [e.g., the culture of honor study described above] or relatively permanent changes in people’s beliefs or behaviors [e.g., attitudes towards recycling] need to be carefully reviewed by the IRB.
  4. Deception: Social psychologists sometimes need to deceive participants [e.g., using a cover story] to avoid demand characteristics by hiding the true nature of the study. This is typically done to prevent participants from modifying their behavior in unnatural ways, especially in laboratory or field experiments. For example, when Milgram recruited participants for his experiments on obedience to authority, he described it as being a study of the effects of punishment on memory! Deception is typically only permitted [a] when the benefits of the study outweigh the risks, [b] when participants are not reasonably expected to be harmed, [c] when the research question cannot be answered without the use of deception, and [d] when participants are informed about the deception as soon as possible, usually through debriefing.
  5. Debriefing: This is the process of informing research participants as soon as possible of the purpose of the study, revealing any deceptions, and correcting any misconceptions they might have as a result of participating. Debriefing also involves minimizing harm that might have occurred. For example, an experiment examining the effects of sad moods on charitable behavior might involve inducing a sad mood in participants by having them think sad thoughts, watch a sad video, or listen to sad music. Debriefing would therefore be the time to return participants’ moods to normal by having them think happy thoughts, watch a happy video, or listen to happy music.
Conclusion

As an immensely social species, we affect and influence each other in many ways, particularly through our interactions and cultural expectations, both conscious and nonconscious. The study of social psychology examines much of the business of our everyday lives, including thoughts, feelings, and behaviors we may be unaware of or ashamed of. The desire to carefully and precisely study these topics, together with advances in technology, has led to the development of many creative techniques that allow researchers to explore the mechanics of how we relate to one another. Consider this your invitation to join the investigation.
