Knowledge - uses a well-defined and well-organized body of knowledge that is intellectual and describes its phenomena of concern.
Mission - enlarges the body of knowledge and subsequently imposes on its members the lifelong obligation to remain current.
Education - entrusts the education of its practitioners to institutions of higher education.
Social construct - applies the body of knowledge in services that are vital to human welfare.
Autonomy - functions autonomously in the formulation of professional policy and in the monitoring of its practice and practitioners.
Accountability - guided by a code of ethics that regulates the relationship between professional and client.
Culture - distinguished by the presence of a specific culture, norms, and values that are common among its members; attracts individuals of intellectual and personal qualities who exalt service above personal gain and who recognize their occupation as their life work.
Compensation - strives to compensate its practitioners by providing freedom of action, opportunity for continuous professional growth, and economic security.
Foods like yogurt, miso, kimchi, sauerkraut, kombucha, and tempeh are rich in “good” bacteria called probiotics. They may help ulcers by fighting an H. pylori infection or by helping treatments work better.
Conclusions: Omeprazole is a well studied and well tolerated agent effective in adults or children as a component in regimens aimed at eradicating H. pylori infections or as monotherapy in the treatment and prophylaxis of GORD with or without oesophagitis or NSAID-induced gastrointestinal damage.
Tests and procedures used to determine whether you have an H. pylori infection include:
Blood test. Analysis of a blood sample may reveal evidence of an active or previous H. pylori infection in your body. However, breath and stool tests are better at detecting active H. pylori infections than is a blood test.
Breath test. During a breath test, you swallow a pill, liquid or pudding that contains tagged carbon molecules. If you have an H. pylori infection, carbon is released when the solution is broken down in your stomach.
Your body absorbs the carbon and expels it when you exhale. You
exhale into a bag, and your doctor uses a special device to detect the
carbon molecules.
Acid-suppressing drugs known as proton pump inhibitors (PPIs),
bismuth subsalicylate (Pepto-Bismol) and antibiotics can interfere with
the accuracy of this test. Your doctor will ask you to stop taking those
medications for a week or two before you have the test. This test is
available for adults and children.
Stool test. A laboratory test called a stool antigen test looks for foreign proteins (antigens) associated with H. pylori infection in your stool. As with the breath test, PPIs
and bismuth subsalicylate can affect the results of this test, so your
doctor will ask you to stop taking them for two weeks before the test.
Scope test. You'll be sedated for this test,
known as an upper endoscopy exam. During the exam, your doctor threads a
long flexible tube equipped with a tiny camera (endoscope) down your
throat and esophagus and into your stomach and duodenum. This instrument
allows your doctor to view any irregularities in your upper digestive
tract and remove tissue samples (biopsy).
These samples are analyzed for H. pylori infection. This test isn't generally recommended solely to diagnose an H. pylori infection because it's more invasive than a breath or stool test, but it may be used to diagnose H. pylori ulcers or if it's needed to rule out other digestive conditions.
H. pylori
infections are usually treated with at least two different antibiotics
at once, to help prevent the bacteria from developing a resistance to
one particular antibiotic. Your doctor also will prescribe or recommend
an acid-suppressing drug, to help your stomach lining heal.
Drugs that can suppress acid include:
Proton pump inhibitors (PPIs). These drugs stop acid from being produced in the stomach. Some examples of PPIs are omeprazole (Prilosec), esomeprazole (Nexium), lansoprazole (Prevacid) and pantoprazole (Protonix).
Histamine (H-2) blockers. These medications block a substance called histamine, which triggers acid production. One example is cimetidine (Tagamet HB).
Bismuth subsalicylate. More commonly known by the brand name Pepto-Bismol, this drug works by coating the ulcer and protecting it from stomach acid.
Your doctor may recommend that you undergo testing for H. pylori
at least four weeks after your treatment. If the tests show the
treatment was unsuccessful, you may undergo another round of treatment
with a different combination of antibiotic medications.
See your primary care doctor if you have signs or symptoms that indicate a complication of H. pylori infection. Your doctor may test and treat you for H. pylori infection, or refer you to a specialist who treats diseases of the digestive system (gastroenterologist).
Because appointments can be brief, and because there's often a lot to
discuss, it's a good idea to be well prepared for your appointment.
Here's some information to help you get ready for your appointment, and
what to expect from your doctor.
What you can do
At the time you make the
appointment, be sure to ask if there's anything you need to do in
advance, such as restrict your diet. Before your appointment, you might
want to write a list that answers the following questions:
When did your symptoms begin?
Does anything make them better or worse?
Have your parents or siblings ever experienced similar problems?
What medications or supplements do you take regularly?
Your time with your doctor is limited. Preparing a list of questions
to ask may help you make the most of your time together. For H. pylori infection, some basic questions to ask your doctor include:
How did H. pylori infection cause the complications I'm experiencing?
Can H. pylori cause other complications?
What kinds of tests do I need?
Do these tests require any special preparation?
What treatments are available?
How will I know if the treatment worked?
As you talk, ask additional questions that occur to you during your appointment.
What to expect from your doctor
Your doctor is
likely to ask you a number of questions. Being ready to answer them may
allow more time to cover other points you want to address. Your doctor
may ask:
Have your symptoms been continuous or occasional?
How severe are your symptoms?
Do you take any over-the-counter pain relievers such as aspirin,
ibuprofen (Advil, Motrin IB, others) or naproxen sodium (Aleve)?
Negligent behavior is often described with terms such as failure to, lack of, incomplete, ineffective, and improper.
The categories of negligence are: failure to follow standards of care,
failure to use equipment in a responsible manner, failure to
communicate, failure to document, failure to assess and monitor, and
failure to act as a patient advocate
Nurses have an obligation to communicate changes in a patient's condition to
the healthcare provider in a timely fashion. When a patient's condition
deteriorates, a nurse's failure to act violates this fundamental
responsibility, undermines patient safety, and has potentially severe
consequences for the patient and nurse alike. The following court case
summary and discussion illustrate the peril of failing to act.
Facts of the case
Mary Long*
was admitted to the hospital with cholelithiasis and common bile duct
dilation and evidence of a bowel obstruction, leading to a diagnosis of
acute cholecystitis. After she underwent a procedure to remove the
gallstones, a nasogastric (NG) tube was inserted as prescribed by the
physician. No further orders were written regarding what actions to take
if the NG tube was dislodged or removed.1
Hospital
records indicated that Ms. Long removed the NG tube within 2 days after
insertion and refused to let the nurses reinsert it. The nurses did not
replace the NG tube or inform the prescribing physician that the tube
had been removed and not replaced. Ms. Long subsequently underwent
surgery for a bowel obstruction. At the time of discharge, she had been
diagnosed with 12 different medical conditions and experienced many
post-op complications.1
Ms.
Long (plaintiff) sued the hospital, hospital system, and two RNs
(defendants) for failing to comply with the physician's order for an NG
tube and for "failure to properly treat...diagnose...and monitor" the
patient.1,2
The plaintiff alleged that after the tube was removed, she "aspirated
and significantly deteriorated," and that her post-op complications
resulted from the nurses' failure to comply with the NG tube order. The
plaintiff also alleged that the nurses failed in their duty to care for
the plaintiff, including a failure to follow policies and procedures.1 Whether the plaintiff was partly liable because she removed the tube herself was not addressed in this lawsuit.
Case dismissed, then appealed
The
defendants asked the trial court to dismiss the lawsuit because the
plaintiff failed to provide an Affidavit of Merit. In New Jersey, the
hospital's location, the Affidavit of Merit statute requires that any
malpractice or negligence action against a licensed person in his or her
professional capacity must be supported with an affidavit by another
appropriate licensed person. The affidavit should state that it is
reasonably probable that the actions leading to the lawsuit did not
comply with acceptable professional standards or treatment practices.3 In other words, the plaintiff would need to have an expert's affidavit to support the claim of professional misconduct.
As
requested by the defendants, the court dismissed the case on the
grounds that the plaintiff had not submitted an expert's affidavit. The
plaintiff appealed this ruling.
Plaintiff prevails upon appeal
At
the appellate level, the plaintiff's lawyers argued that the trial
court was wrong in dismissing their case for lack of an affidavit. They
based this argument on the "common knowledge" exception to New Jersey's
Affidavit of Merit statute. Under this exception, an expert is not
needed if jurors and other laypersons could reasonably use their common
knowledge, understanding, and experience to determine whether the
defendants were negligent in their duties. The plaintiff argued that the
nurses' failure to reinsert the NG tube fell under the common knowledge
exception.
The appellate court agreed, ruling
that in this case, a layperson could use "ordinary understanding and
experience to determine a defendant's negligence, without the benefit of
the specialized knowledge of an expert."1
The Court reasoned that even though the general requirement is to have
an expert establish the standard of care and the breach of care in a
professional negligence claim, that requirement is not absolute. In
cases where individuals of average intelligence can assess the
carelessness of a defendant, an expert is not required.
The
court drew another distinction between this case and other cases where
experts are required. Most cases requiring an Affidavit of Merit concern
situations in which a professional defendant had taken some action
requiring an expert's professional opinion. In contrast, Ms. Long's case
involved an "act of omission," or a failure to act.
The
appellate court determined that the nurses' alleged failure to take
action by not alerting the physician that the NG tube was no longer in
place was obvious enough that a layperson would not need an expert's
assistance to determine the significance of the nurses' inaction,
especially given that a physician had ordered its insertion. The court
concluded that "common sense dictates that some action should have been
taken when the nurses were confronted with the sudden termination of
[the patient's] medical treatment that was required by the physician
charged with her care."1 The case was returned to the lower court for future trial or settlement.
The
appellate court's decision expresses no opinion about whether the
nurses were negligent or whether they should be found negligent when the
case goes to trial in the lower court. It means only that the
plaintiff's case against the defendants should not have been dismissed
because of the absence of expert testimony. The plaintiff will still
have to prove the merits of the case when she goes before the trial
court.
Discussion
Once an NG tube is
inserted, the RN's major responsibilities involve monitoring the patient
and managing the NG tube. The results of this monitoring will likely
indicate when the tube should be removed.4
It
is not uncommon for a patient to experience some discomfort during and
after NG tube insertion. It is also not unusual for patients to pull out
the NG tube before completion of the therapy, usually due to
discomfort.5
The
details and merits of the Long case will not be discussed here because
they were not litigated in the trial court. Instead, because the
hospital's records indicate that the patient removed the NG tube and
refused to permit reinsertion, this discussion will revolve around the
question of what constitutes a nurse's duty of care when a patient
refuses treatment.
For nurses, the obligation
to care begins once they accept the assignment to care for a patient.
Many of these obligations are regulated by the various state Nurse
Practice Acts and the laws of medical malpractice encompassing the duty
of care. That duty of care includes carrying out the healthcare
provider's prescriptions for the patient's therapy and communicating the
patient's condition to the multidisciplinary team, especially the
prescribing healthcare provider.6
Until a healthcare provider's prescription for a patient is terminated,
it remains in effect. If the plaintiff can establish that a breach of
duty led to patient injury, the professional will be found to be
negligent.7
To
be successful at trial, the plaintiff needs to prove all the elements
of negligence: that there was a duty of care by the defendants, that the
defendants breached that duty, that there was injury, and that the
breach was what caused the injury. In this case, the plaintiff needs to
prove that the nurses' failure to notify the physician led to the
plaintiff's injury and that the outcome would have been different if the
nurses had informed the physician.8
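The all-or-nothing structure of these elements can be made concrete with a short sketch. The snippet below is a minimal illustration only, not legal advice and not drawn from the case; the class and field names are hypothetical labels for the four elements described above.

```python
# Minimal sketch (illustration only): a negligence claim succeeds only if
# every element is established; failing any single element defeats the claim.
# The class and field names here are hypothetical, not from the article.
from dataclasses import dataclass

@dataclass
class NegligenceClaim:
    duty_of_care: bool    # the defendants owed the plaintiff a duty of care
    breach: bool          # the defendants breached that duty
    injury: bool          # the plaintiff suffered an injury
    causation: bool       # the breach is what caused the injury

    def proven(self) -> bool:
        # All four elements must hold; three out of four is not enough.
        return all((self.duty_of_care, self.breach, self.injury, self.causation))

# Example: duty, breach, and injury are shown, but causation is not established.
claim = NegligenceClaim(duty_of_care=True, breach=True, injury=True, causation=False)
print(claim.proven())  # False
```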
In
view of a competent adult's right to refuse treatment and the nurse's
duty to provide care, what should a nurse do when a patient has refused
treatment?
Follow the nursing process
Many
facts of this case were not discussed in court, so it is unclear
whether the nurses recognized the dangers of their alleged inaction or
why they took no action over several shifts and days. One must then view
the case from the perspective of what is common knowledge to nurses and
how that could have helped them resolve the issue. That common
knowledge is the nursing process.
Used
consistently, the nursing process is a tool that helps nurses provide
appropriate patient care. The five steps of the nursing process (assessment, nursing diagnosis, planning, intervention, and evaluation, or ADPIE) provide an organized method of patient care that, if followed,
would guide a nurse along a path that gets the patient the right care
even when the nurse is not completely certain about what to do.
If the nurses in this case had utilized the nursing process, they could have found a solution with minimal delay. An assessment
would have shown that the tube had been removed. Further assessment,
through questioning the patient, would have revealed the reasons why the
patient removed the NG tube and why she did not want it back in. Those
reasons would have led them to a nursing diagnosis, such as pain. Planning
would have given them a chance to determine how to address the
diagnosis so that the patient could accept the reinsertion as an intervention. Evaluation would include assessing the patient's comfort level following tube reinsertion.
If
the patient continued to refuse an NG tube, or if the reinsertion was
not successful or acceptable to the patient, then the next intervention
would have been to communicate the issue to other members of the team
and escalate it up the chain of command, documenting along the way who
was notified, when they were notified, and what actions were taken by
those up the command chain. This process should continue until the issue
is resolved.9
Communication
problems account for approximately one-third of malpractice cases
against nurses, with more than 75% resulting in serious injury or death.10 As in this case, any failure to communicate a patient's condition is potentially harmful.
Fundamental nursing responsibilities
Regardless
of whether the nurses are found to be negligent at trial, the big
takeaway from the appellate court's ruling is that the nurses'
obligation to communicate a patient's condition to a healthcare
provider, such as a physician, is so fundamental and simple to
understand that no expert affidavit is necessary. In other words, it is
common sense. As members of a multidisciplinary team, nurses must share
information. There is no "do nothing" option.
In this case, the nurses may or may not win in court. But doing nothing is what took them to court in the first place.
REFERENCES
1. Cowley v. Virtua Health System, 193 A.3d 330 (2018).
2. Law.com. Justices to take up "common knowledge" exception, duty to warn for outsourced parts. ALM Media. January 24, 2019.
If a nurse forces a treatment on a patient without their consent, the nurse can be charged with battery.
Law is a set of rules we must follow; nursing law exists to protect the patient and defines what nurses can and cannot do.
Sources of law: the constitution, statutes, administrative law (e.g., the BRN is given authority to enforce the Nurse Practice Act), and common law; administrative regulations also govern settings such as nursing homes.
Criminal law prohibits conduct harmful to society; practicing medicine without a license is a criminal offense, and breaking a criminal law can mean a fine, jail, or both.
Nurses are more likely to get into trouble under civil law, where one party sues another (e.g., "your dog broke my fence").
Criminal guilt must be proven beyond a reasonable doubt; civil liability uses the lower "more likely than not" (51 percent) standard, and the penalty is monetary.
Intentional torts include assault, battery, and false imprisonment.
False imprisonment is any unlawful confinement within fixed boundaries.
If a patient wants to leave against medical advice (AMA), respect their autonomy: have them sign the AMA form and let them go.
Autonomy also means the patient decides about their own care and can ask for information about other options.
Invasion of privacy: when inserting a urinary catheter, create privacy, and ask the patient's permission before bringing in a student nurse to observe or perform the catheterization; asking is always better than not asking, and allow only the students who are needed, not the whole team.
Battery is harmful or offensive contact; assault is a threat that makes someone fear that contact might happen, so you do not have to touch a person to commit assault.
Touch is easily misinterpreted when people are upset, for example family members in the ICU whose loved one is not doing well and who are out of control.
Example: a former student was accused of battery after she gently guided an upset family member with her hands and body language while the patient was doing poorly; even well-meaning touch can get you into trouble, so it is better to stay reasonable and calm people without touching them.
Getting informed consent before performing a procedure on someone can prevent some of these disturbances, but not every situation can be prevented; routine tasks (such as drawing blood) and do-not-resuscitate decisions can still be misinterpreted or upset the family.
Consent may be implied (for example, the patient holds out an arm when you say "I am going to take your blood pressure") or explicit (a formal, documented process, such as consent for surgery); paramedics often rely on implied consent.
A patient with dementia may not be able to give valid consent, and forcing treatment anyway is battery, so don't do it.
Safety note: in the ER and on psychiatric wards, some patients may try to hurt you; one nurse who cared for a patient with a psychiatric background was stalked afterward and had to get a restraining order, and another was nearly the victim of assault during a group therapy session.
When you witness a consent signature, make sure the patient actually understands what they are signing.
In an emergency, consent is presumed.
Consent cannot be taken after the patient has been given narcotics or other pre-procedure sedation; if the procedure is ready to go and the consent is missing, the first step is to stop, because you cannot give the pre-medication until consent has been obtained from a patient who still has the capacity to give it.
Communication for consent is not always verbal, and it must be in language the patient understands (their own language, not medical terms), aiming for at least a basic understanding.
Patient information must be kept confidential: access is granted only to those who need it (including student nurses), you practice on your own license and never under anyone else's license, and you do not gossip about patients in elevators or other public places.
Confidentiality is regulated at the federal level (HIPAA).
Nurses are mandatory reporters: suspected abuse or wrongdoing must be reported to the authorities within 24 hours, and you are protected as long as you report in good faith.
Do not put patient information on social media; it is easy to get caught up in the internet.
Examples come up every term: a De Anza student's post popped up online after she finished her first semester, and a nursing student in Colorado got into trouble over photos from a football game; such posts do not respect the dignity of patients, many companies look at profiles and profile pictures, and some schools and workplaces will discipline you for them.
Be mindful: no photographs on the unit or ward.
Malpractice is handled under civil law, not criminal law; you are legally responsible, and the patient will typically sue you, the doctor, and the hospital together.
Joint liability means the defendants can point at each other: the doctor can accuse you of not doing your job, and you can challenge the doctor.
Example: if a patient falls out of a hospital window and dies, the hospital is responsible for safety mechanisms, the nurse is responsible for the patient, and when the RN delegates to a CNA, the RN is still responsible for what was delegated.
Negligence becomes malpractice when it involves the professional part of the role: failing to act as a reasonable and prudent (prudent means careful) nurse would in a similar situation.
The components of a malpractice claim are (1) duty to act, (2) breach (you did not do it), (3) harm (something happened to the patient), and (4) causation (your breach caused the harm).
If you end up on the wrong side of a malpractice claim, the defense looks at those four elements and the standard of care, and causation is usually the deciding one.
Example: suppose you went on your break and stayed too long and the patient was harmed; a typical lawyer's argument is "she was gone too long," and the case turns on whether that breach actually caused the damage.
Preventing negligence and malpractice: the best way to stay out of malpractice is to know what you are doing, including the setting you work in.
If a setting just seems crazy and so unsafe that you feel your license is at risk, follow up on the unsafe situation, and if it cannot be fixed, quit that job; this comes up especially in long-term care, where the goal is to identify the situation and work within the rule of law.
Communication with the patient, the family, and others is the best protection: it has been shown that many people who could sue do not, and many who do sue nurses do so out of frustration over a bad outcome and over not being listened to.
Professional liability insurance is cheap (about 70 dollars a year) and covers you in case a patient sues.
Breaking policy and inadequate charting are common sources of liability.
Case studies: you need to document, because a case can go to court two years later; read the legal cases on failure to communicate.
If a patient is confused, that is fine, you do your job, but the doctor has to review the patient's medications.
Inaccurate counting of instruments in surgery is another classic claim.
Always identify the patient properly: never ask a yes/no question, and always use identifiers, even when you think you know the patient very well; if the ID band is worn out, it is time to get a new one. The same applies in the acute care hospital.
A medical mistake by itself is usually not criminal; it is treated as criminal when there is criminal behavior behind the mistake, and serious errors must be reported (for example, in a CMS report).
Here is a story: a patient was ordered Versed (midazolam) for sedation before a scan, but she was given vecuronium instead by a float nurse who pulled the drug from the Pyxis machine (the ATM for drugs).
The nurse did not read the name on the vial: Versed comes as a small ready-to-give liquid (about 2 mL), while the drug she pulled had to be drawn up as about 10 cc, and she gave the wrong medication.
Vecuronium is a paralytic given before intubation to quiet the whole body down, including breathing, so the patient must be on a ventilator with oxygen; it is not a sedative for a scan.
Because she was given the wrong medication, the patient was paralyzed, could not even open her eyes or call for help, and the medication error became the original cause of death.
Even if it had been the right drug, the nurse (an ICU nurse working as a float nurse) should have monitored the patient's respiratory status.
Historically, nurses who made honest errors faced malpractice claims, not prosecution, but this nurse was charged criminally as well as facing malpractice, which alarmed nurses as a group.
This is the kind of case in which you practice on your own license: make a serious enough mistake and you can go to jail, so you have to be careful and act as a reasonable and prudent nurse.
The Nurse Practice Act (NPA) is the law that defines and controls nursing; its primary purpose is to protect public health and safety.
It is enforced by the state boards of nursing (in California, the BRN): it defines what we are allowed to do, and the board also approves nursing programs.
Practicing outside what it defines is how nurses get in trouble with the board.
AND (allow natural death) and DNR (do not resuscitate) orders record what patients want and how aggressively they want to be treated; a durable power of attorney names a decision-maker (a daughter, spouse, or other family member).
These decisions require careful assessment, and you need to evaluate the patient.
In long-term care, the facility must ensure compliance with the law and educate the staff.
Before sharing information about a patient, make sure the patient has actually allowed that person (for example, his sister) to know about his condition.
In charting, do not use labels such as "noncompliant" or "combative" when a patient declines care; describe the behavior objectively and let the reader draw their own conclusions.
How can you lose your license, or fail to get one? Grounds include obtaining a license by fraud, a felony conviction, falsely portraying yourself to the public or to any healthcare provider as a nurse, and failing to report when you are a mandatory reporter (you could be the only one there during a lunch break).
Something like a DUI on your background check can look bad to the board: the board can delay or deny the license, some employers (for example, the VA) will not let you in, and applicants are often scared they will not get the license, although after an evaluation the board can still grant it.
Two protections to know: the whistleblower act protects nurses who report unsafe conditions, and the Good Samaritan act protects you when you help in an emergency; if you act as a reasonable and prudent person, you will not face a malpractice charge.
1. My country is Myanmar, which is situated in Southeast Asia. Most people are influenced by their culture, their neighbors, and the different groups of people around them. The different states have influenced aspects of culture such as language, education, and knowledge.
2. In my family, we speak Burmese as the primary language at home, but we speak other dialects too.
3. In my country, a parent, especially the breadwinner of the family, is usually the one who makes the healthcare decisions.
4. The good thing is that when someone is sick, the neighbors help take care of them, provide food and drinks, and offer comforting words to the sick person. It is sad to see that some people who lack healthcare knowledge do not do any research and blindly listen to other people, such as their neighbors. In some families, when someone is ill or has a fever, they do not seek medical attention; they just take a natural remedy or apply medicated leaves, as generations of their family have done. Some believe that when they are ill they should stay in bed, covered in blankets, and sleep the whole day.
2.1 – Self, Culture and Social Comparisons
Introduction
In this chapter, we focus on a few different components. The
first is self—or how we form our personal identity. From a developmental
perspective, when does a child first recognize themselves as a separate
being? At what point can we take the perspective of others and start to
empathize with them? What aspects of our identity are most salient to
us? As we think about our identity development, we also want to examine
the influence of culture. Throughout this section, you should reflect on
your personal culture and how it has shaped your identity. Have you
been taught to value more independent or individualistic ideals and
goals or to focus more on group goals? What about social norms? How has
your culture shaped your social norms? And finally, we discuss social
comparisons. As we have an understanding of our own identity, we often
tend to compare ourselves with others around us. We may engage in upward
or downward comparisons and these comparisons can impact our overall
self-esteem and self-concept.
Learning Objectives
Differentiate between the social actor, the motivated agent, and the autobiographical author
Identify the core differences between individualistic cultures and collectivist cultures
Differentiate between upward and downward social comparison and the impact each has on our self-esteem
Define the Frog Pond Effect and the Dunning-Kruger effect
In the Temple of Apollo at Delphi, the ancient Greeks inscribed the words: “Know thyself.”
For at least 2,500 years, and probably longer, human beings have
pondered the meaning of the ancient aphorism. Over the past century,
psychological scientists have joined the effort. They have formulated
many theories and tested countless hypotheses that speak to the central
question of human selfhood: How does a person know who he or she is?
The ancient Greeks seemed to realize that the self is inherently reflexive—it reflects back on itself. In the disarmingly simple idea made famous by the great psychologist William James (1892/1963),
the self is what happens when “I” reflects back upon “Me.” The self is
both the I and the Me—it is the knower, and it is what the knower knows
when the knower reflects upon itself. When you look back at yourself,
what do you see? When you look inside, what do you find? Moreover, when
you try to change your self in some way, what is it that you are trying to change? The philosopher Charles Taylor (1989) describes the self as a reflexive project. In modern life, Taylor argues, we often try to manage, discipline, refine, improve, or develop the self. We work on
our selves, as we might work on any other interesting project. But what
exactly is it that we work on? Imagine for a moment that you have
decided to improve yourself. You might, say, go on a diet to
improve your appearance. Or you might decide to be nicer to your mother,
in order to improve that important social role. Or maybe the problem is
at work—you need to find a better job or go back to school to prepare
for a different career. Perhaps you just need to work harder. Or get
organized. Or recommit yourself to religion. Or maybe the key is to
begin thinking about your whole life story in a completely different
way, in a way that you hope will bring you more happiness, fulfillment,
peace, or excitement. Although there are many different ways you might
reflect upon and try to improve the self, it turns out that many, if not
most, of them fall roughly into three broad psychological categories (McAdams & Cox, 2010). The I may encounter the Me as (a) a social actor, (b) a motivated agent, or (c) an autobiographical author.
The Social Actor
Figure 2.2 The Shakespeare, High Street, London. In some ways people are just like actors on stage. We play roles and follow scripts every day. By Brian, via Flickr, CC BY-SA 2.0.
Shakespeare tapped into a deep truth about human nature when he
famously wrote, “All the world’s a stage, and all the men and women
merely players.” He was wrong about the “merely,” however, for there is
nothing more important for human adaptation than the manner in which we
perform our roles as actors in the everyday theatre of social life. What
Shakespeare may have sensed but could not have fully understood is that
human beings evolved to live in social groups. Beginning with Darwin (1872/1965) and running through contemporary conceptions of human evolution, scientists have portrayed human nature as profoundly social (Wilson, 2012). For a few million years, Homo sapiens
and their evolutionary forerunners have survived and flourished by
virtue of their ability to live and work together in complex social
groups, cooperating with each other to solve problems and overcome
threats and competing with each other in the face of limited resources.
As social animals, human beings strive to get along and get ahead in the presence of each other (Hogan, 1982).
Evolution has prepared us to care deeply about social acceptance and
social status, for those unfortunate individuals who do not get along
well in social groups or who fail to attain a requisite status among
their peers have typically been severely compromised when it comes to
survival and reproduction. It makes consummate evolutionary sense,
therefore, that the human “I” should apprehend the “Me” first and
foremost as a social actor.
For human beings, the sense of the self as a social actor begins to
emerge around the age of 18 months. Numerous studies have shown that by
the time they reach their second birthday most toddlers recognize
themselves in mirrors and other reflecting devices (Lewis & Brooks-Gunn, 1979; Rochat, 2003).
What they see is an embodied actor who moves through space and time.
Many children begin to use words such as “me” and “mine” in the second
year of life, suggesting that the I now has linguistic labels that can
be applied reflexively to itself: I call myself “me.” Around the same
time, children also begin to express social emotions such as
embarrassment, shame, guilt, and pride (Tangney, Stuewig, & Mashek, 2007).
These emotions tell the social actor how well he or she is performing
in the group. When I do things that win the approval of others, I feel
proud of myself. When I fail in the presence of others, I may feel
embarrassment or shame. When I violate a social rule, I may experience
guilt, which may motivate me to make amends.
Many of the classic psychological theories of human selfhood point to
the second year of life as a key developmental period. For example,
Freud (1923/1961) and his followers in the psychoanalytic tradition traced the emergence of an autonomous ego back to the second year. Freud used the term “ego” (in German das Ich, which also translates into “the I”) to refer to an executive self in the personality. Erikson (1963)
argued that experiences of trust and interpersonal attachment in the
first year of life help to consolidate the autonomy of the ego in the
second. Coming from a more sociological perspective, Mead (1934)
suggested that the I comes to know the Me through reflection, which may
begin quite literally with mirrors but later involves the reflected
appraisals of others. I come to know who I am as a social actor, Mead
argued, by noting how other people in my social world react to
my performances. In the development of the self as a social actor, other
people function like mirrors—they reflect who I am back to me.
Research has shown that when young children begin to make attributions about themselves, they start simple (Harter, 2006).
At age 4, Jessica knows that she has dark hair, knows that she lives in
a white house, and describes herself to others in terms of simple
behavioral traits. She may say that she is “nice,” or
“helpful,” or that she is “a good girl most of the time.” By the time
she hits fifth grade (age 10), Jessica sees herself in more complex
ways, attributing traits to the self such as “honest,” “moody,”
“outgoing,” “shy,” “hard-working,” “smart,” “good at math but not gym
class,” or “nice except when I am around my annoying brother.” By late
childhood and early adolescence, the personality traits that people
attribute to themselves, as well as those attributed to them by others,
tend to correlate with each other in ways that conform to a
well-established taxonomy of five broad trait domains, repeatedly
derived in studies of adult personality and often called the Big Five: (1) extraversion, (2) neuroticism, (3) agreeableness, (4) conscientiousness, and (5) openness to experience (Roberts, Wood, & Caspi, 2008). By late childhood, moreover, self-conceptions will likely also include important social roles: “I am a good student,” “I am the oldest daughter,” or “I am a good friend to Sarah.”
Traits and roles, and variations on these notions, are the main currency of the self as social actor (McAdams & Cox, 2010).
Trait terms capture perceived consistencies in social performance. They
convey what I reflexively perceive to be my overall acting style, based
in part on how I think others see me as an actor in many different
social situations. Roles capture the quality, as I perceive it, of
important structured relationships in my life. Taken together, traits
and roles make up the main features of my social reputation.
If you have ever tried hard to change yourself, you may have taken
aim at your social reputation, targeting your central traits or your
social roles. Maybe you woke up one day and decided that you must become
a more optimistic and emotionally upbeat person. Taking into
consideration the reflected appraisals of others, you realized that even
your friends seem to avoid you because you bring them down. In
addition, it feels bad to feel so bad all the time: Wouldn’t it be
better to feel good, to have more energy and hope? In the language of
traits, you have decided to “work on” your “neuroticism.” Or maybe
instead, your problem is the trait of “conscientiousness”: You are
undisciplined and don’t work hard enough, so you resolve to make changes
in that area. Self-improvement efforts such as these—aimed at changing
one’s traits to become a more effective social actor—are sometimes
successful, but they are very hard—kind of like dieting. Research
suggests that broad traits tend to be stubborn, resistant to change,
even with the aid of psychotherapy. However, people often have more
success working directly on their social roles. To become a more
effective social actor, you may want to take aim at the important roles
you play in life. What can I do to become a better son or daughter? How
can I find new and meaningful roles to perform at work, or in my family,
or among my friends, or in my church and community? By doing concrete
things that enrich your performances in important social roles, you may
begin to see yourself in a new light, and others will notice the change,
too. Social actors hold the potential to transform their performances
across the human life course. Each time you walk out on stage, you have a
chance to start anew.
The Motivated Agent
Figure 2.3
When we observe others we only see how they act but are never able to
access the entirety of their internal experience. Public domain (CC0 1.0).
Whether we are talking literally about the theatrical stage or more
figuratively, as I do in this module, about the everyday social
environment for human behavior, observers can never fully know what is
in the actor’s head, no matter how closely they watch. We can see actors
act, but we cannot know for sure what they want or what they value,
unless they tell us straightaway. As a social actor, a person may come
across as friendly and compassionate, or cynical and mean-spirited, but
in neither case can we infer their motivations from their traits or
their roles. What does the friendly person want? What is the cynical
father trying to achieve? Many broad psychological theories of the self
prioritize the motivational qualities of human behavior—the inner needs,
wants, desires, goals, values, plans, programs, fears, and aversions
that seem to give behavior its direction and purpose (Bandura, 1989; Deci & Ryan, 1991; Markus & Nurius, 1986). These kinds of theories explicitly conceive of the self as a motivated agent.
To be an agent is to act with direction and purpose, to move forward
into the future in pursuit of self-chosen and valued goals. In a sense,
human beings are agents even as infants, for babies can surely act in
goal-directed ways. By age 1 year, moreover, infants show a strong
preference for observing and imitating the goal-directed, intentional
behavior of others, rather than random behaviors (Woodward, 2009).
Still, it is one thing to act in goal-directed ways; it is quite
another for the I to know itself (the Me) as an intentional and
purposeful force who moves forward in life in pursuit of self-chosen
goals, values, and other desired end states. In order to do so, the
person must first realize that people indeed have desires and goals in
their minds and that these inner desires and goals motivate
(initiate, energize, put into motion) their behavior. According to a
strong line of research in developmental psychology, attaining this kind
of understanding means acquiring a theory of mind (Wellman, 1993), which occurs for most children by the age of 4. Once a child
understands that other people’s behavior is often motivated by inner
desires and goals, it is a small step to apprehend the self in similar
terms.
Building on theory of mind and other cognitive and social
developments, children begin to construct the self as a motivated agent
in the elementary school years, layered over their still-developing
sense of themselves as social actors. Theory and research on what
developmental psychologists call the age 5-to-7 shift converge
to suggest that children become more planful, intentional, and
systematic in their pursuit of valued goals during this time (Sameroff & Haith, 1996).
Schooling reinforces the shift in that teachers and curricula place
increasing demands on students to work hard, adhere to schedules, focus
on goals, and achieve success in particular, well-defined task domains.
Their relative success in achieving their most cherished goals,
furthermore, goes a long way in determining children’s self-esteem (Robins, Tracy, & Trzesniewski, 2008).
Motivated agents feel good about themselves to the extent they believe
that they are making good progress in achieving their goals and
advancing their most important values.
Goals and values become even more important for the self in adolescence, as teenagers begin to confront what Erikson (1963) famously termed the developmental challenge of identity.
For adolescents and young adults, establishing a psychologically
efficacious identity involves exploring different options with respect
to life goals, values, vocations, and intimate relationships and
eventually committing to a motivational and ideological agenda for adult
life—an integrated and realistic sense of what I want and value in life
and how I plan to achieve it (Kroger & Marcia, 2011). Committing oneself to an integrated suite of life goals and values is perhaps the greatest achievement for the self as motivated agent.
Establishing an adult identity has implications, as well, for how a
person moves through life as a social actor, entailing new role
commitments and, perhaps, a changing understanding of one’s basic
dispositional traits. According to Erikson, however, identity
achievement is always provisional, for adults continue to work on their
identities as they move into midlife and beyond, often relinquishing old
goals in favor of new ones, investing themselves in new projects and
making new plans, exploring new relationships, and shifting their
priorities in response to changing life circumstances (Freund & Riediger, 2006; Josselson, 1996).
There is a sense whereby any time you try to change
yourself, you are assuming the role of a motivated agent. After all, to
strive to change something is inherently what an agent does. However,
what particular feature of selfhood you try to change may correspond to
your self as actor, agent, or author, or some combination. When you try
to change your traits or roles, you take aim at the social actor. By
contrast, when you try to change your values or life goals, you are
focusing on yourself as a motivated agent. Adolescence and young
adulthood are periods in the human life course when many of us focus
attention on our values and life goals. Perhaps you grew up as a
traditional Catholic, but now in college you believe that the values
inculcated in your childhood no longer function so well for you. You no
longer believe in the central tenets of the Catholic Church, say, and
are now working to replace your old values with new ones. Or maybe you
still want to be Catholic, but you feel that your new take on faith
requires a different kind of personal ideology. In the realm of the
motivated agent, moreover, changing values can influence life goals. If
your new value system prioritizes alleviating the suffering of others,
you may decide to pursue a degree in social work, or to become a public
interest lawyer, or to live a simpler life that prioritizes people over
material wealth. A great deal of the identity work we do in adolescence
and young adulthood is about values and goals, as we strive to
articulate a personal vision or dream for what we hope to accomplish in
the future.
The Autobiographical Author
Even as the I continues to develop a sense of the Me as both a social actor and a motivated agent, a third standpoint for
selfhood gradually emerges in the adolescent and early-adult years. The
third perspective is a response to Erikson’s (1963)
challenge of identity. According to Erikson, developing an identity
involves more than the exploration of and commitment to life goals and
values (the self as motivated agent), and more than committing to new
roles and re-evaluating old traits (the self as social actor). It also
involves achieving a sense of temporal continuity in life—a reflexive understanding of how I have come to be the person I am becoming,
or put differently, how my past self has developed into my present
self, and how my present self will, in turn, develop into an envisioned
future self. In his analysis of identity formation in the life of the
16th-century Protestant reformer Martin Luther, Erikson (1958) describes the culmination of a young adult’s search for identity in this way:
“To be adult means among other things to see one’s own life in
continuous perspective, both in retrospect and prospect. By accepting
some definition of who he is, usually on the basis of a function in an
economy, a place in the sequence of generations, and a status in the
structure of society, the adult is able to selectively reconstruct
his past in such a way that, step for step, it seems to have planned
him, or better, he seems to have planned it. In this sense,
psychologically we do choose our parents, our family history, and the
history of our kings, heroes, and gods. By making them our own, we
maneuver ourselves into the inner position of proprietors, of creators.”
— (Erikson, 1958, pp. 111–112; emphasis added).
In this rich passage, Erikson intimates that the development of a
mature identity in young adulthood involves the I’s ability to construct
a retrospective and prospective story about the Me (McAdams, 1985).
In their efforts to find a meaningful identity for life, young men and
women begin “to selectively reconstruct” their past, as Erikson wrote,
and imagine their future to create an integrative life story, or what
psychologists today often call a narrative identity.
A narrative identity is an internalized and evolving story of the self
that reconstructs the past and anticipates the future in such a way as
to provide a person’s life with some degree of unity, meaning, and
purpose over time (McAdams, 2008; McLean, Pasupathi, & Pals, 2007). The self typically becomes an autobiographical author
in the early-adult years, a way of being that is layered over the
motivated agent, which is layered over the social actor. In order to
provide life with the sense of temporal continuity and deep meaning that
Erikson believed identity should confer, we must author a personalized
life story that integrates our understanding of who we once were, who we
are today, and who we may become in the future. The story helps to
explain, for the author and for the author’s world, why the social actor
does what it does and why the motivated agent wants what it wants, and
how the person as a whole has developed over time, from the past’s
reconstructed beginning to the future’s imagined ending.
By the time they are 5 or 6 years of age, children can tell well-formed stories about personal events in their lives (Fivush, 2011).
By the end of childhood, they usually have a good sense of what a
typical biography contains and how it is sequenced, from birth to death (Thomsen & Bernsten, 2008).
But it is not until adolescence, research shows, that human beings
express advanced storytelling skills and what psychologists call autobiographical reasoning (Habermas & Bluck, 2000; McLean & Fournier, 2008).
In autobiographical reasoning, a narrator is able to derive substantive
conclusions about the self from analyzing his or her own personal
experiences. Adolescents may develop the ability to string together
events into causal chains and inductively derive general themes about
life from a sequence of chapters and scenes (Habermas & de Silveira, 2008).
For example, a 16-year-old may be able to explain to herself and to
others how childhood experiences in her family have shaped her vocation
in life. Her parents were divorced when she was 5 years old, the
teenager recalls, and this caused a great deal of stress in her family.
Her mother often seemed anxious and depressed, but she (the now-teenager
when she was a little girl—the story’s protagonist) often tried to
cheer her mother up, and her efforts seemed to work. In more recent
years, the teenager notes that her friends often come to her with their
boyfriend problems. She seems to be very adept at giving advice about
love and relationships, which stems, the teenager now believes, from her
early experiences with her mother. Carrying this causal narrative
forward, the teenager now thinks that she would like to be a marriage
counselor when she grows up.
Figure 2.4 2014 Edmonton Pride Parade. Young people often “try on” many variations of identities to see which best fits their private sense of themselves. By Sangudo, via Flickr, CC BY-NC-SA 2.0.
Unlike children, then, adolescents can tell a full and convincing
story about an entire human life, or at least a prominent line of
causation within a full life, explaining continuity and change in the
story’s protagonist over time. Once the cognitive skills are in place,
young people seek interpersonal opportunities to share and refine their
developing sense of themselves as storytellers (the I) who tell stories
about themselves (the Me). Adolescents and young adults author a
narrative sense of the self by telling stories about their experiences
to other people, monitoring the feedback they receive from the tellings,
editing their stories in light of the feedback, gaining new experiences
and telling stories about those, and on and on, as selves create
stories that, in turn, create new selves (McLean et al., 2007).
Gradually, in fits and starts, through conversation and introspection,
the I develops a convincing and coherent narrative about the Me.
Contemporary research on the life story emphasizes the strong effect of culture on narrative identity (Hammack, 2008).
Culture provides a menu of favored plot lines, themes, and character
types for the construction of self-defining life stories.
Autobiographical authors sample selectively from the cultural menu,
appropriating ideas that seem to resonate well with their own life
experiences. As such, life stories reflect the culture, wherein they are
situated as much as they reflect the authorial efforts of the
autobiographical I.
As one example of the tight link between culture and narrative identity, McAdams (2013) and others (e.g., Kleinfeld, 2012) have highlighted the prominence of the redemptive narrative
in American culture. Epitomized in such iconic cultural ideals as the
American dream, Horatio Alger stories, and narratives of Christian
atonement, redemptive stories track the move from suffering to an
enhanced status or state, while scripting the development of a chosen
protagonist who journeys forth into a dangerous and unredeemed world (McAdams, 2013).
Hollywood movies often celebrate redemptive quests. Americans are
exposed to similar narrative messages in self-help books, 12-step
programs, Sunday sermons, and in the rhetoric of political campaigns.
Over the past two decades, the world’s most influential spokesperson for
the power of redemption in human lives may be Oprah Winfrey, who tells
her own story of overcoming childhood adversity while encouraging
others, through her media outlets and philanthropy, to tell similar
kinds of stories for their own lives (McAdams, 2013).
Research has demonstrated that American adults who enjoy high levels of
mental health and civic engagement tend to construct their lives as
narratives of redemption, tracking the move from sin to salvation, rags
to riches, oppression to liberation, or sickness/abuse to
health/recovery (McAdams, Diamond, de St. Aubin, & Mansfield, 1997; McAdams, Reynolds, Lewis, Patten, & Bowman, 2001; Walker & Frimer, 2007). In American society, these kinds of stories are often seen to be inspirational.
At the same time, McAdams (2011, 2013)
has pointed to shortcomings and limitations in the redemptive stories
that many Americans tell, which mirror cultural biases and stereotypes
in American culture and heritage. McAdams has argued that redemptive
stories support happiness and societal engagement for some Americans,
but the same stories can encourage moral righteousness and a naïve
expectation that suffering will always be redeemed. For better and
sometimes for worse, Americans seem to love stories of personal
redemption and often aim to assimilate their autobiographical memories
and aspirations to a redemptive form. Nonetheless, these same stories
may not work so well in cultures that espouse different values and
narrative ideals (Hammack, 2008).
It is important to remember that every culture offers its own
storehouse of favored narrative forms. It is also essential to know that
no single narrative form captures all that is good (or bad) about a
culture. In American society, the redemptive narrative is but one of
many different kinds of stories that people commonly employ to make
sense of their lives.
What is your story? What kind of a narrative are you working on? As
you look to the past and imagine the future, what threads of continuity,
change, and meaning do you discern? For many people, the most dramatic
and fulfilling efforts to change the self happen when the I works hard,
as an autobiographical author, to construct and, ultimately, to tell a
new story about the Me. Storytelling may be the most powerful form of
self-transformation that human beings have ever invented. Changing one’s
life story is at the heart of many forms of psychotherapy and
counseling, as well as religious conversions, vocational epiphanies, and
other dramatic transformations of the self that people often celebrate
as turning points in their lives (Adler, 2012).
Storytelling is often at the heart of the little changes, too, minor
edits in the self that we make as we move through daily life, as we live
and experience life, and as we later tell it to ourselves and to
others.
Conclusion
For human beings, selves begin as social actors, but they eventually
become motivated agents and autobiographical authors, too. The I first
sees itself as an embodied actor in social space; with development,
however, it comes to appreciate itself also as a forward-looking source
of self-determined goals and values, and later yet, as a storyteller of
personal experience, oriented to the reconstructed past and the imagined
future. To “know thyself” in mature adulthood, then, is to do three
things: (a) to apprehend and to perform with social approval my
self-ascribed traits and roles, (b) to pursue with vigor and (ideally)
success my most valued goals and plans, and (c) to construct a story
about life that conveys, with vividness and cultural resonance, how I
became the person I am becoming, integrating my past as I remember it,
my present as I am experiencing it, and my future as I hope it to be.
Culture
Introduction
When you think about different cultures, you likely picture their
most visible features, such as differences in the way people dress, or
in the architectural styles of their buildings. You might consider
different types of food, or how people in some cultures eat with
chopsticks while people in others use forks. There are differences in
body language, religious practices, and wedding rituals. While these are
all obvious examples of cultural differences, many distinctions are
harder to see because they are psychological in nature.
Figure 2.5 RnR Collection & FAREEDA
Culture goes beyond the way people dress and the food they eat. It also
stipulates morality, identity, and social roles. RnR Collection & FAREEDA. By Faizal Riza MOHD RAF, via Flickr, CC BY-NC 2.0.
Just as culture can be seen in dress and food, it can also be seen in
morality, identity, and gender roles. People from around the world
differ in their views of premarital sex, religious tolerance, respect
for elders, and even the importance they place on having fun. Similarly,
many behaviors that may seem innate are actually products of culture.
Approaches to punishment, for example, often depend on cultural norms
for their effectiveness. In the United States, people who ride public
transportation without buying a ticket face the possibility of being
fined. By contrast, in some other societies, people caught dodging the
fare are socially shamed by having their photos posted publicly. The
reason this campaign of “name and shame” might work in one society but
not in another is that members of different cultures differ in how
comfortable they are with being singled out for attention. This strategy
is less effective for people who are not as sensitive to the threat of
public shaming.
The psychological aspects of culture are often overlooked because
they are often invisible. The way that gender roles are learned is a
cultural process as is the way that people think about their own sense
of duty toward their family members. In this module, you will be
introduced to one of the most fascinating aspects of social psychology:
the study of cultural processes. You will learn about research methods
for studying culture, basic definitions related to this topic, and about
the ways that culture affects a person’s sense of self.
Learning Objectives
Appreciate culture as an evolutionary adaptation common to all humans.
Understand cultural processes as variable patterns rather than as fixed scripts.
Understand the difference between cultural and cross-cultural research methods.
Appreciate cultural awareness as a source of personal well-being, social responsibility, and social harmony.
Explain the difference between individualism and collectivism.
Define “self-construal” and provide a real life example.
Social Psychology Research Methods
Social psychologists are interested in the ways that cultural forces
influence psychological processes. They study culture as a means of
better understanding the ways it affects our emotions, identity,
relationships, and decisions. Social psychologists generally ask
different types of questions and use different methods than do
anthropologists. Anthropologists are more likely to conduct ethnographic
studies. In this type of research, the scientist spends time observing a
culture and conducting interviews. In this way, anthropologists often attempt
to understand and appreciate culture from the point of view of the
people within it. Social psychologists who adopt this approach are often
thought to be studying cultural psychology, and they are likely to use
interviews as a primary research methodology.
For example, in a 2004 study,
Hazel Markus and her colleagues wanted to explore class culture as it
relates to well-being. The researchers adopted a cultural psychology
approach and interviewed participants to discover—in the participants’
own words—what “the good life” is for Americans of different social
classes. Dozens of participants answered 30 open-ended questions
about well-being during recorded, face-to-face interviews. After the
interview data were collected, the researchers read the transcripts.
From these, they agreed on common themes that appeared important to the
participants. These included, among others, “health,” “family,”
“enjoyment,” and “financial security.”
The Markus team discovered that people with a Bachelor’s Degree were
more likely than high school educated participants to mention
“enjoyment” as a central part of the good life. By contrast, those with a
high school education were more likely to mention “financial security”
and “having basic needs met.” There were similarities as well:
participants from both groups placed a heavy emphasis on relationships
with others. Their understanding of how these relationships are
tied to well-being differed, however. The college educated—especially
men—were more likely to list “advising and respecting” as crucial
aspects of relationships while their high school educated counterparts
were more likely to list “loving and caring” as important. As you can
see, cultural psychological approaches place an emphasis on the
participants’ own definitions, language, and understanding of their own
lives. In addition, the researchers were able to make comparisons
between the groups, but these comparisons were based on loose themes
created by the researchers.
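To make the logic of this kind of theme comparison concrete, here is a minimal sketch in Python. The group labels, theme tags, and tallies are hypothetical illustrations of coded interview data, not the Markus team’s coding scheme or results.

```python
# Toy sketch: comparing how often coded themes appear in each group's
# interview transcripts. All labels and tallies are hypothetical.
from collections import Counter

# Each list holds the theme tags coders assigned to one group's transcripts.
coded_mentions = {
    "bachelor's degree": ["enjoyment", "family", "health", "enjoyment",
                          "advising and respecting"],
    "high school": ["financial security", "family", "having basic needs met",
                    "loving and caring", "financial security"],
}

for group, mentions in coded_mentions.items():
    counts = Counter(mentions)
    print(group, counts.most_common(3))
```

Running it simply prints the most frequently mentioned themes per group, mirroring the kind of loose, researcher-defined comparison described above.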
Cultural psychology is distinct from cross-cultural psychology, and this
can be confusing. Cross-cultural studies are those that use standard forms
of measurement, such as Likert scales, to compare people from different
cultures and identify their differences. Both cultural and cross-cultural
studies have their own advantages and disadvantages (see Table 2.1).
Table 2.1: Summary of advantages and disadvantages of ethnographic study and cross-cultural study.
Interestingly, researchers—and the rest of us!—have as much to learn from
cultural similarities as from cultural differences, and both require
comparisons across cultures. For example, Diener and Oishi (2000)
were interested in exploring the relationship between money and
happiness. They were specifically interested in cross-cultural
differences in levels of life satisfaction between people from different
cultures. To examine this question, they used international surveys that
asked all participants the exact same question, such as “All things
considered, how satisfied are you with your life as a whole these days?”
and used a standardized response scale for answers; in this case, one that
asked people to respond on a 1-10 scale. They also collected data on
average income levels in each nation and adjusted these for local
differences in how many goods and services that money can buy.
The Diener research team discovered that, across more than 40 nations,
there was a tendency for money to be associated with higher life
satisfaction. People from richer countries such as Denmark, Switzerland,
and Canada reported relatively high satisfaction, while their counterparts
from poorer countries such as India and Belarus reported lower levels. There
were some interesting exceptions, however. People from Japan—a wealthy
nation—reported lower satisfaction than did their peers in nations with
similar wealth. In addition, people from Brazil—a poorer nation—had
unusually high scores compared to their income counterparts.
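As a rough illustration of the cross-cultural approach described above, the sketch below (Python 3.10+) correlates purchasing-power-adjusted national income with mean responses to a standardized 1-10 life-satisfaction item. Every number and nation label is a made-up placeholder, not Diener and Oishi’s data.

```python
# Minimal sketch of a nation-level income/satisfaction comparison.
# All figures below are illustrative placeholders, not real survey data.
from statistics import correlation, mean  # statistics.correlation requires Python 3.10+

# Hypothetical data: PPP-adjusted income and 1-10 satisfaction responses.
survey = {
    "Nation A": {"income_ppp": 52_000, "responses": [8, 9, 7, 8, 9]},
    "Nation B": {"income_ppp": 46_000, "responses": [8, 7, 8, 9, 8]},
    "Nation C": {"income_ppp": 9_000,  "responses": [5, 6, 5, 6, 7]},
    "Nation D": {"income_ppp": 14_000, "responses": [8, 7, 9, 8, 8]},  # an outlier, like Brazil
}

incomes = [nation["income_ppp"] for nation in survey.values()]
satisfaction = [mean(nation["responses"]) for nation in survey.values()]

# A positive Pearson r means richer nations tend to report higher average
# life satisfaction, while individual nations remain free to deviate.
r = correlation(incomes, satisfaction)
print(f"Income-satisfaction correlation across nations: r = {r:.2f}")
```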
One problem with cross-cultural studies is that they are vulnerable to
ethnocentric bias. This means that the researcher who designs the study
might be influenced by personal biases that could affect research
outcomes—without even being aware of it. For example, a study on happiness
across cultures might investigate the ways that personal freedom is
associated with feeling a sense of purpose in life. The researcher might
assume that when people are free to choose their own work and leisure,
they are more likely to pick options they care deeply about. Unfortunately,
this researcher might overlook the fact that in much of the world it is
considered important to sacrifice some personal freedom in order to
fulfill one’s duty to the group (Triandis, 1995). Because of the danger of
this type of bias, social psychologists must continue to improve their
methodology.
What is Culture?
Defining Culture
Like the words “happiness” and “intelligence,” the word “culture” can be
tricky to define. Culture is a word that suggests social patterns of shared
meaning. In essence, it is a collective understanding of the way the world
works, shared by members of a group and passed down from one generation
to the next. For example, members of the Yanomamö tribe, in South
America, share a cultural understanding of the world that includes the
idea that there are four parallel levels to reality, including an
abandoned level, an earthly level, and heavenly and hell-like levels.
Similarly, members of surfing culture understand their athletic pastime
as being worthwhile and governed by formal rules of etiquette known only
to insiders. There are several features of culture that are central to
understanding the uniqueness and diversity of the human mind:
Versatility: Culture can change and adapt. Someone from the
state of Orissa, in India, for example, may have multiple identities.
She might see herself as Oriya when at home and speaking her native
language. At other times, such as during the national cricket match
against Pakistan, she might consider herself Indian. This is known as
situational identity.
Sharing:
Culture is the product of people sharing with one another. Humans
cooperate and share knowledge and skills with other members of their
networks. The ways they share, and the content of what they share, helps
make up culture. Older adults, for instance, remember a time when
long-distance friendships were maintained through letters that arrived
in the mail every few months. Contemporary youth culture accomplishes
the same goal through the use of instant text messages on smart phones.
Accumulation: Cultural knowledge is cumulative. That is,
information is “stored.” This means that a culture’s collective learning
grows across generations. We understand more about the world today than
we did 200 years ago, but that doesn’t mean the culture from long ago
has been erased by the new. For instance, members of the Haida culture—a
First Nations people in British Columbia, Canada—profit from both
ancient and modern experiences. They might employ traditional fishing
practices and wisdom stories while also using modern technologies and
services.
Patterns: There are systematic and predictable ways of
behavior or thinking across members of a culture. Patterns emerge from
adapting, sharing, and storing cultural information. Patterns can be
both similar and different across cultures. For example, in both Canada
and India it is considered polite to bring a small gift to a host’s
home. In Canada, it is more common to bring a bottle of wine and for the
gift to be opened right away. In India, by contrast, it is more common
to bring sweets, and often the gift is set aside to be opened later.
Understanding the changing nature of culture is the first step toward
appreciating how it helps people. The concept of cultural intelligence
refers to the ability to understand why members of other cultures act in
the ways they do. Rather than dismissing foreign behaviors as weird,
inferior, or immoral, people high in cultural intelligence can
appreciate differences even if they do not necessarily share another
culture’s views or adopt its ways of doing things.
Thinking about Culture
One of the biggest problems with understanding culture is that the
word itself is used in different ways by different people. When someone
says, “My company has a competitive culture,” does it mean the same
thing as when another person says, “I’m taking my children to the museum
so they can get some culture”? The truth is, there are many ways to
think about culture. Here are three ways to parse this concept:
Progressive cultivation: This refers to a relatively small
subset of activities that are intentional and aimed at “being refined.”
Examples include learning to play a musical instrument, appreciating
visual art, and attending theater performances, as well as other
instances of so-called “high art.” This was the predominant use of the
word culture through the mid-19th century. This notion of culture formed
the basis, in part, of a superior mindset on the part of people from
the upper economic classes. For instance, many tribal groups were seen
as lacking cultural sophistication under this definition. In the late
19th century, as global travel began to rise, this understanding of
culture was largely replaced with an understanding of it as a way of
life.
Ways of Life: This refers to distinct patterns of beliefs
and behaviors widely shared among members of a culture. The “ways of
life” understanding of culture shifts the emphasis to patterns of belief
and behavior that persist over many generations. Although cultures can
be small—such as “school culture”—they usually describe larger
populations, such as nations. People occasionally confuse national
identity with culture. There are similarities in culture between Japan,
China, and Korea, for example, even though politically they are very
different. Indeed, each of these nations also contains a great deal of
cultural variation within itself.
Shared Learning: In the 20th century, anthropologists and social
psychologists developed the concept of enculturation to refer to the ways
people learn about and share cultural knowledge. Where “ways of life” is
treated as a noun, “enculturation” is a verb; it describes a fluid and
dynamic process and emphasizes that culture is something people learn. As
children are raised in a society, they are taught how to behave according
to regional cultural norms. As immigrants settle in a new country, they
learn a new set of rules for behaving and interacting. In this way, it
is possible for a person to have multiple cultural identities.
Table 2.2: Culture concepts and their application
The understanding of culture as a learned pattern of views and
behaviors is interesting for several reasons. First, it highlights the
ways groups can come into conflict with one another. Members of
different cultures simply learn different ways of behaving. Modern youth
culture, for instance, interacts with technologies such as smart phones
using a different set of rules than people who are in their 40s, 50s,
or 60s. Older adults might find texting in the middle of a face-to-face
conversation rude while younger people often do not. These differences
can sometimes become politicized and a source of tension between groups.
One example of this is Muslim women who wear a hijab, or head
scarf. Non-Muslims do not follow this practice, so occasional
misunderstandings arise about the appropriateness of the tradition.
Second, understanding that culture is learned is important because it
means that people can adopt an appreciation of patterns of behavior that
are different than their own. For example, non-Muslims might find it
helpful to learn about the hijab. Where did this tradition come from?
What does it mean and what are various Muslim opinions about wearing
one? Finally, understanding that culture is learned can be helpful in
developing self-awareness. For instance, people from the United States
might not even be aware of the fact that their attitudes about public
nudity are influenced by their cultural learning. While women often go
topless on beaches in Europe and women living a traditional tribal
existence in places like the South Pacific also go topless, it is
illegal for women in some parts of the United States to do so. These
cultural norms for modesty—reflected in government laws and policies—also
enter the discourse on social issues such as the appropriateness of
breast-feeding in public. Understanding that your preferences are—in
many cases—the products of cultural learning might empower you to revise
them if doing so will lead to a better life for you or others.
The Self and Culture
Figure 2.6 Bowl of the Buddhist priest
In a world that is increasingly connected by travel, technology, and
business, the ability to understand and appreciate the differences
between cultures is more important than ever. Psychologists call this
capability “cultural intelligence.” Source: Flickr, CC0 1.0
Traditionally, social psychologists have thought about how patterns of
behavior have an overarching effect on populations’ attitudes. Harry
Triandis, a
cross-cultural psychologist, has studied culture in terms of
individualism and collectivism. Triandis became interested in culture
because of his unique upbringing. Born in Greece, he was raised under
both the German and Italian occupations during World War II. The Italian
soldiers broadcast classical music in the town square and built a
swimming pool for the townspeople. Interacting with these
foreigners—even though they were an occupying army—sparked Triandis’
curiosity about other cultures. He realized that he would have to learn
English if he wanted to pursue academic study outside of Greece and so
he practiced with the only local who knew the language: a mentally ill
70-year-old who was incarcerated for life at the local hospital. He went
on to spend decades studying the ways people in different cultures
define themselves (Triandis, 2008).
So, what exactly were these two patterns of culture that Triandis focused
on: individualism and collectivism?
Individualists, such as most people born and raised in Australia or the
United States, define themselves as individuals. They seek personal
freedom and prefer to voice their own opinions and make their own
decisions. By contrast, collectivists—such as most people born and
raised in Korea or in Taiwan— are more likely to emphasize their
connectedness to others. They are more likely to sacrifice their
personal preferences if those preferences come in conflict with the
preferences of the larger group (Triandis, 1995).
Both individualism and collectivism can further be divided into vertical and horizontal dimensions (Triandis, 1995).
Essentially, these dimensions describe social status among members of a
society. People in vertical societies differ in status, with some
people being more highly respected or having more privileges, while in
horizontal societies people are relatively equal in status and
privileges. These dimensions are, of course, simplifications.
Neither individualism nor collectivism is the “correct way to live.”
Rather, they are two separate patterns with slightly different emphases.
People from individualistic societies often have more social freedoms,
while collectivistic societies often have better social safety nets.
Table 2.3: Individualist and collectivist cultures
There are yet other ways of thinking about culture as well. The
cultural patterns of individualism and collectivism are linked to an
important psychological phenomenon: the way that people understand
themselves. Known as self-construal, this is the way people define how
they “fit” in relation to others. Individualists are more likely to define
themselves in terms of an independent self-construal. This means that
people see themselves as A) unique individuals with a stable collection of
personal traits, and B) believing that these traits drive behavior. By
contrast, people from collectivist cultures are more likely to identify
with an interdependent self-construal. This means that people see
themselves as A) defined differently in each new social context, and B)
believing that the social context, rather than internal traits, is the
primary driver of behavior (Markus & Kitayama, 1991).
What do the independent and interdependent self look like in daily
life? One simple example can be seen in the way that people describe
themselves. Imagine you had to complete the sentence starting with “I
am ...” and that you had to do this 10 times. People with an
independent sense of self are more likely to describe themselves in
terms of traits such as “I am honest,” “I am intelligent,” or “I am
talkative.” On the other hand, people with a more interdependent sense
of self are more likely to describe themselves in terms of their
relation to others such as “I am a sister,” “I am a good friend,” or “I
am a leader on my team” (Markus, 1977).
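As a playful illustration of this contrast, the following Python sketch sorts a handful of “I am ...” completions into trait-like versus relational self-descriptions. The keyword list and the classification rule are illustrative assumptions, not a validated research instrument.

```python
# Toy sketch: tallying trait-like vs. relational "I am ..." statements.
# The cue words below are illustrative assumptions, not published codes.
RELATIONAL_CUES = {"sister", "brother", "friend", "daughter", "son",
                   "teammate", "member", "leader on my team"}

def classify(statement: str) -> str:
    """Label one 'I am ...' completion as 'relational' or 'trait'."""
    text = statement.lower()
    return "relational" if any(cue in text for cue in RELATIONAL_CUES) else "trait"

responses = ["I am honest", "I am a sister", "I am talkative",
             "I am a good friend", "I am intelligent"]

counts = {"trait": 0, "relational": 0}
for response in responses:
    counts[classify(response)] += 1

print(counts)  # e.g. {'trait': 3, 'relational': 2}
```

A real study would, of course, rely on trained coders rather than keyword matching, but the tally captures the basic idea: independent self-construals lean toward traits, interdependent ones toward relationships.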
The psychological consequences of having an independent or
interdependent self can also appear in more surprising ways. Take, for
example, the emotion of anger. In Western cultures, where people are
more likely to have an independent self, anger arises when people’s
personal wants, needs, or values are attacked or frustrated (Markus & Kitayama, 1994).
Angry Westerners sometimes complain that they have been “treated
unfairly.” Simply put, anger—in the Western sense—is the result of
violations of the self. By contrast, people from interdependent self
cultures, such as Japan, are likely to experience anger somewhat
differently. They are more likely to feel that anger is unpleasant not
because of some personal insult but because anger represents a lack of
harmony between people. In this instance, anger is particularly
unpleasant when it interferes with close relationships.
Culture is Learned
It’s important to understand that culture is learned. People aren’t
born using chopsticks or being good at soccer simply because they have a
genetic predisposition for it. They learn to excel at these activities
because they are born in countries like Argentina, where playing soccer
is an important part of daily life, or in countries like Taiwan, where
chopsticks are the primary eating utensils. So, how are such cultural
behaviors learned? It turns out that cultural skills and knowledge are
learned in much the same way a person might learn to do algebra or knit.
They are acquired through a combination of explicit teaching and
implicit learning—by observing and copying.
Cultural teaching can take many forms. It begins with parents and
caregivers, because they are the primary influence on young children.
Caregivers teach kids, both directly and by example, about how to behave
and how the world works. They encourage children to be polite,
reminding them, for instance, to say “thank you.” They teach kids how to
dress in a way that is appropriate for the culture. They introduce
children to religious beliefs and the rituals that go with them. They
even teach children how to think and feel! Adult men, for example, often
exhibit a certain set of emotional expressions—such as being tough and
not crying—that provides a model of masculinity for their children. This
is why we see different ways of expressing the same emotions in
different parts of the world.
Figure 2.7 Brazil and Colombia match at the FIFA World Cup
Culture teaches us what behaviors and emotions are appropriate or expected
in different situations. By: Portal de Copa. Source: Wikimedia Commons, CC BY 3.0
In some societies, it is considered appropriate to conceal anger.
Instead of expressing their feelings outright, people purse their lips,
furrow their brows, and say little. In other cultures, however, it is
appropriate to express anger. In these places, people are more likely to
bare their teeth, furrow their brows, point or gesture, and yell (Matsumoto, Yoo, & Chung, 2010).
Such patterns of behavior are learned. Often, adults are not even aware
that they are, in essence, teaching psychology—because the lessons are
happening through observational learning.
Let’s consider a single example of a way you behave that is learned,
which might surprise you. All people gesture when they speak. We use our
hands in fluid or choppy motions—to point things out, or to pantomime
actions in stories. Consider how you might throw your hands up and
exclaim, “I have no idea!” or how you might motion to a friend that it’s
time to go. Even people who are born blind use hand gestures when they
speak, so to some degree this is a universal behavior, meaning
all people naturally do it. However, social researchers have discovered
that culture influences how a person gestures. Italians, for example,
live in a society full of gestures. In fact, they use about 250 of them (Poggi, 2002)!
Some are easy to understand, such as a hand against the belly,
indicating hunger. Others, however, are more difficult. For example,
pinching the thumb and index finger together and drawing a line
backwards at face level means “perfect,” while knocking a fist on the
side of one’s head means “stubborn.”
Beyond observational learning, cultures also use rituals to teach people
what is important. For example, young people who are
interested in becoming Buddhist monks often have to endure rituals that
help them shed feelings of specialness or superiority—feelings that run
counter to Buddhist doctrine. To do this, they might be required to wash
their teacher’s feet, scrub toilets, or perform other menial tasks.
Similarly, many Jewish adolescents go through the process of bar and bat mitzvah.
This is a ceremonial reading from scripture that requires the study of
Hebrew and, when completed, signals that the youth is ready for full
participation in public worship.
Cultural Relativism
When social psychologists research culture, they try to avoid making value
judgments. This commitment to value neutrality is considered an important
approach to scientific objectivity. But, while such objectivity is the
goal, it is a difficult one to achieve. With this in mind, anthropologists
have tried to adopt a sense of empathy for the cultures they study. This
has led to cultural relativism, the principle of regarding and valuing the
practices of a culture from the point of view of that culture. It is a
considerate and practical way to avoid hasty judgments. Take, for example,
the common practice of
same-sex friends in India walking in public while holding hands: this is
a common behavior and a sign of connectedness between two people. In
England, by contrast, holding hands is largely limited to romantically
involved couples, and often suggests a sexual relationship. These are
simply two different ways of understanding the meaning of holding hands.
Someone who does not take a relativistic view might be tempted
to see their own understanding of this behavior as superior and,
perhaps, the foreign practice as being immoral.
Despite the fact that cultural relativism promotes the appreciation
for cultural differences, it can also be problematic. At its most
extreme it leaves no room for criticism of other cultures, even if
certain cultural practices are horrific or harmful. Many practices have
drawn criticism over the years. In Madagascar, for example, the famadihana funeral
tradition includes bringing bodies out from tombs once every seven
years, wrapping them in cloth, and dancing with them. Some people view
this practice as disrespectful to the body of a deceased person. Another
example can be seen in the historical Indian practice of sati—the
burning to death of widows on their deceased husband’s funeral pyre.
This practice was outlawed by the British when they colonized India.
Today, a debate rages about the ritual cutting of genitals of children
in several Middle Eastern and African cultures. To a lesser extent, this
same debate arises around the circumcision of baby boys in Western
hospitals. When considering harmful cultural traditions, it can be
patronizing to the point of racism to use cultural relativism as an
excuse for avoiding debate. To assume that people from other cultures
are neither mature enough nor responsible enough to consider criticism
from the outside is demeaning.
Figure 2.8 Friendship Day
In some cultures, it’s perfectly normal for same-sex friends to hold
hands, while in others handholding is restricted to romantically
involved individuals only. By: Subharnab Majumdar. Source: Flickr, CC BY 2.0
Positive cultural relativism is the belief that the world would be a
better place if everyone practiced some form of intercultural empathy
and respect. This approach offers a potentially important contribution
to theories of cultural progress: to better understand human behavior,
people should avoid adopting extreme views that block discussions about
the basic morality or usefulness of cultural practices.
Conclusion
We live in a unique moment in history. We are experiencing the rise
of a global culture in which people are connected and able to exchange
ideas and information better than ever before. International travel and
business are on the rise. Instantaneous communication and social media
are creating networks of contacts who would never otherwise have had a
chance to connect. Education is expanding, music and films cross
national borders, and state-of-the-art technology affects us all. In
this world, an understanding of what culture is and how it happens can
set the foundation for acceptance of differences and respectful
disagreements. The science of social psychology—along with the other
culture-focused sciences, such as anthropology and sociology—can help
produce insights into cultural processes. These insights, in turn, can
be used to increase the quality of intercultural dialogue, to preserve
cultural traditions, and to promote self-awareness.
Reflection: Think about the following:
What is culture and what does
the word culture mean to you? Oftentimes, when we hear the word culture,
we tend to connect it with our race or our ethnic identity. While this
is a piece of our culture, it is not a comprehensive view. Culture
encompasses gender, age, religion, sexuality, social norms, family,
tradition, etc. There are a multitude of factors that create your
personal culture or what makes you who you are as an individual. What
aspects of your culture are most salient to you? How has your family,
upbringing, experiences shaped you into the person you are today? Do you
find that you prefer and value individual goals or think about group or
family goals when making a decision? For example, if you have chosen
your college major already, what factors did you consider when making
this decision? Did you make this decision independently or with
influence from family and friends, or both?
Activity: “I am”
Think about the following and be prepared to discuss your responses in class:
How does social media influence social comparisons?
Does it tend to influence more upward or downward comparisons?
Be prepared to support your answer.
Social Norms
Social Norm Violations
In the chapter, we discussed how our culture can influence the
development of social norms, or expected ways of behaving in certain
situations. For example, the expected norm for a classroom may include
raising your hand when you have a question and not talking during a
lecture. A norm when waiting in a line is to face the front of the line.
But what happens when someone violates a social norm? Have you
personally ever violated a social norm? If so, what happened? How did
you feel? How did others around you respond? These brief videos below
illustrate a few social norm violations. How do you think you would
respond if you saw someone violating these social norms?
Self and Identity Resources
Resource 1
McAdams, D. P. (2020). Self and identity. In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from Self and identity
Outside Resources
Web: The website for the Foley Center for the Study of Lives, at
Northwestern University. The site contains research materials, interview
protocols, and coding manuals for conducting studies of narrative
identity.
Adler, J. M. (2012). Living into
the story: Agency and coherence in a longitudinal study of narrative
identity development and mental health over the course of psychotherapy.
Journal of Personality and Social Psychology, 102, 367–389.
Bandura, A. (1989). Human agency in social-cognitive theory. American Psychologist, 44, 1175–1184.
Darwin, C. (1872/1965). The expression of emotions in man and animals. Chicago, IL: University of Chicago Press.
Deci, E. L., & Ryan, R. M.
(1991). A motivational approach to self: Integration in personality. In
R. Dienstbier & R. M. Ryan (Eds.), Nebraska symposium on motivation (Vol. 38, pp. 237–288). Lincoln, NE: University of Nebraska Press.
Erikson, E. H. (1963). Childhood and society (2nd ed.). New York, NY: Norton.
Erikson, E. H. (1958). Young man Luther. New York, NY: Norton.
Fivush, R. (2011). The development of autobiographical memory. In S. T. Fiske, D. L. Schacter, & S. E. Taylor (Eds.), Annual review of psychology (Vol. 62, pp. 559–582). Palo Alto, CA: Annual Reviews, Inc.
Freud, S. (1923/1961). The ego and the id. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 19). London, UK:
Hogarth.
Freund, A. M., & Riediger,
M. (2006). Goals as building blocks of personality and development in
adulthood. In D. K. Mroczek & T. D. Little (Eds.), Handbook of personality development (pp. 353–372). Mahwah, NJ: Erlbaum.
Habermas, T., & Bluck, S. (2000). Getting a life: The emergence of the life story in adolescence. Psychological Bulletin, 126, 748–769.
Habermas, T., & de
Silveira, C. (2008). The development of global coherence in life
narrative across adolescence: Temporal, causal, and thematic aspects. Developmental Psychology, 44, 707–721.
Hammack, P. L. (2008). Narrative and the cultural psychology of identity. Personality and Social Psychology Review, 12, 222–247.
Harter, S. (2006). The self. In N. Eisenberg (Ed.) & W. Damon & R. M. Lerner (Series Eds.), Handbook of child psychology: Vol. 3. Social, emotional, and personality development (pp. 505–570). New York, NY: Wiley.
Hogan, R. (1982). A socioanalytic theory of personality. In M. Paige (Ed.), Nebraska symposium on motivation (Vol. 29, pp. 55–89). Lincoln, NE: University of Nebraska Press.
James, W. (1892/1963). Psychology. Greenwich, CT: Fawcett.
Josselson, R. (1996). Revising herself: The story of women’s identity from college to midlife. New York, NY: Oxford University Press.
Kleinfeld, J. (2012). The frontier romance: Environment, culture, and Alaska identity. Fairbanks, AK: University of Alaska Press.
Kroger, J., & Marcia, J.
E. (2011). The identity statuses: Origins, meanings, and
interpretations. In S. J. Schwartz, K. Luyckx, & V. L. Vignoles
(Eds.), Handbook of identity theory and research (pp. 31–53). New York, NY: Springer.
Lewis, M., & Brooks-Gunn, J. (1979). Social cognition and the acquisition of self. New York, NY: Plenum.
Markus, H., & Nurius, P. (1986). Possible selves. American Psychologist, 41, 954–969.