Ministry of Public Health and Social Development of the Russian Federation
Stavropol State Medical Academy
Department of medical and biological physics
I. I. MARKOV, O. V. VECHER, E. I. KHRYNINA
PRACTICAL COURSE ON ELEMENTS OF MATHEMATICAL STATISTICS AND MEDICAL-BIOLOGICAL PHYSICS
Part II
Stavropol
2007
УДК 51; 53; 57.
ББК 22.11 я 73
П-69
PRACTICAL COURSE ON ELEMENTS OF MATHEMATICAL STATISTICS AND MEDICAL-BIOLOGICAL PHYSICS. Textbook for students of General Medicine and Dentistry in the English-speaking Medium. Part II. – Stavropol: SSMA. – 2007. – 90 p.
The textbook includes all the theoretical and practical material needed for the study of mathematical statistics and medical-biological physics.
Authors:
– Doctor of Technical Sciences, Professor, Head of the Department of Medical and Biological Physics, Stavropol State Medical Academy.
– Candidate of Physical and Mathematical Sciences, Senior Lecturer, Department of Medical and Biological Physics, Stavropol State Medical Academy.
– Candidate of Physical and Mathematical Sciences, Senior Lecturer, Department of Medical and Biological Physics, Stavropol State Medical Academy.
Reviewers:
– Doctor of Physical and Mathematical Sciences, Professor, Stavropol State University.
Znamenskaya Stoyana Vasilyevna – Candidate of Pedagogical Sciences, Associate Professor, Department of Foreign Languages with a Course of Latin, Stavropol State Medical Academy.
– Senior Lecturer, Department of Foreign Languages with a Course of Latin, Stavropol State Medical Academy.
Recommended for publication by the Council of Natural Science Disciplines and the Cyclic Methodological Commission for English-Medium Education of Students of the Stavropol State Medical Academy.
© Stavropol State Medical Academy, 2007
Lesson 1
Elements of Combinatorics. The Binomial of Newton
1.1. Sets
The branch of mathematics that studies problems of choosing elements from a set and arranging these elements in some order is called combinatorics.
Sets made up of different objects that are distinct from one another are called groups. If we have 10 different digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), we can make groups of three digits each: 125, 521, 784, and so on; in this way various groups are obtained from the same digits. The objects (of any nature) of which groups are made are called elements. To record statements about sets and their elements, the following notation is accepted: sets are usually denoted by capital letters of the Latin alphabet (A, B, C, …) and their elements by small letters (a, b, c, …). The word "belongs" is replaced by the symbol ∈, and "does not belong" by the symbol ∉. A set that has no elements is called the empty set and is denoted by the symbol Ø. An example of an empty set is the set of people who are older than 300 years. Two sets are considered equal if they consist of the same elements. The set C is called the union of two sets A and B if it consists of all elements that belong to at least one of the sets A and B. The union of sets A and B is denoted A ∪ B, where ∪ is the union sign. For example, the union of the sets A = {1; 3; 4} and B = {0; 2} is the set A ∪ B = {0; 1; 2; 3; 4}. One can likewise speak of the union of three or more sets and, correspondingly, of their intersection. In fig. 1.1, the set A ∪ B represents the union of the sets A and B (shaded area).
Fig. 1.1 Fig. 1.3
Fig. 1.2 Fig. 1.4
The intersection of sets A and B is the set made up of the elements belonging simultaneously to both sets (fig. 1.2). The intersection of A and B is denoted A ∩ B, where ∩ is the intersection sign; for example: {1; 3; 4} ∩ {0; 2} = Ø, {1; 3; 4} ∩ {0; 1; 2; 3} = {1; 3}.
The difference of two sets A and B is the set consisting of all elements that belong to A and do not belong to B. The difference of A and B is denoted A \ B. For example, if A = {1; 2; 3; 4; 5} and B = {3; 4; 5; 6; 7; 8}, then A \ B = {1; 2}. Thus, A \ B = A \ (A ∩ B).
A subset B of a given set A is a set consisting of some of the elements of A, i. e. a subset is a part of the set. That B is a subset of A is written B ⊂ A.
To represent sets and the relations between them visually, it is convenient to draw geometrical figures that show these relations. For example, to show that set A is a subset of set B, we draw these sets as in fig. 1.3. If it is necessary to show that sets A and B have no common elements, they are drawn as in fig. 1.4. Such figures are called Euler-Venn diagrams.
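The set operations described above can be tried directly in Python, whose built-in `set` type implements union, intersection, difference, and the subset test. The sketch below reuses the numeric sets from the examples in this section:

```python
# Sets from the examples in this section
A = {1, 3, 4}
B = {0, 2}
C = {0, 1, 2, 3}

union_AB = A | B                               # union: {0, 1, 2, 3, 4}
inter_AB = A & B                               # intersection with B: empty set
inter_AC = A & C                               # intersection with C: {1, 3}
diff = {1, 2, 3, 4, 5} - {3, 4, 5, 6, 7, 8}    # difference: {1, 2}
is_subset = {1, 3} <= A                        # subset test: True

print(union_AB, inter_AB, inter_AC, diff, is_subset)
```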
There are several kinds of groups of elements: arrangements, permutations, and combinations.
1.2. Arrangements
Let a set consist of m elements, and let us take from it groups of n elements each. These groups differ from each other either in their elements or in the order of the elements, but the number of elements in each group is the same. Such groups are called arrangements of m elements taken n at a time.
In the general case, the number of all possible arrangements of m elements taken n at a time (n < m) is denoted A_m^n and may be calculated by the following formula:

A_m^n = m(m − 1)(m − 2) … (m − n + 1)   (1.1)

or

A_m^n = m! / (m − n)!   (1.1a)

One can say that the number of all possible arrangements of m elements taken n at a time is equal to the product of n consecutive integers, the greatest of which is m.
For example:

A_5^2 = 5 · 4 = 20;   A_6^3 = 6 · 5 · 4 = 120.
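A minimal Python sketch of formula (1.1): the product of n consecutive integers ending at m, cross-checked against the factorial form (1.1a) via the standard library (`math.perm` gives the same count directly in Python 3.8+):

```python
import math

def arrangements(m, n):
    """Number of arrangements A_m^n = m*(m-1)*...*(m-n+1)  (formula 1.1)."""
    result = 1
    for k in range(m - n + 1, m + 1):
        result *= k
    return result

m, n = 5, 2
print(arrangements(m, n))                          # 5 * 4 = 20
print(math.factorial(m) // math.factorial(m - n))  # formula (1.1a), same value
print(math.perm(m, n))                             # built-in, same value
```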
1.3. Permutations
If we take n elements and change only the order of their arrangement in the group, such groups are called permutations of n elements. One can say that permutations of n elements are arrangements of n elements taken n at a time.
The number of all possible permutations of n elements is denoted P_n and may be calculated by the following formula:

P_n = n! = 1 · 2 · 3 · … · n   (1.2)

(the product of the numbers 1, 2, 3, …, n is denoted n! and is called "n factorial"). The number n! is the number of ways in which n objects can be ordered. By definition, 0! = 1.
For example:
1) P_3 = 3! = 1 · 2 · 3 = 6;
2) P_4 = 4! = 1 · 2 · 3 · 4 = 24;
3) P_5 = 5! = 120.
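The statement that n! counts the orderings of n objects can be verified by explicitly listing every ordering of a small set (a sketch using the standard library; the four-letter set is an arbitrary illustration):

```python
import itertools
import math

# P_n = n!: the number of orderings of n distinct elements.
elements = ['a', 'b', 'c', 'd']
orderings = list(itertools.permutations(elements))

print(len(orderings))                  # number of orderings found by listing
print(math.factorial(len(elements)))   # 4!, the same count by formula (1.2)
```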
1.4. Combinations
Let a set consist of m elements, and let us take from it groups of n elements each. These groups differ from each other only in their elements (the number of elements in each group is the same, and the order does not matter). Such groups are called combinations of m elements taken n at a time.
In the general case, the number of all possible combinations of m elements taken n at a time (n < m) is denoted C_m^n and may be calculated by the following formula:

C_m^n = m! / (n! (m − n)!)   (1.3)
This is the most important of the combinatorial rules given in this chapter and is the only one we will use extensively. This rule is basic to the formula of the binomial distribution presented in the next chapter.
For example:
1) C_6^2 = 6! / (2! · 4!) = (5 · 6) / (1 · 2) = 15;
2) to solve the equation C_x^2 = 15, we know that C_x^2 = x(x − 1)/2; replacing this value in the equation gives x(x − 1) = 30, whence x = 6.
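Formula (1.3) can be sketched in Python and cross-checked against the built-in `math.comb`; the last line also confirms the relation between combinations and arrangements, C_m^n = A_m^n / n! (order is counted out):

```python
import math

def combinations(m, n):
    """C_m^n = m! / (n! * (m - n)!)  (formula 1.3)."""
    return math.factorial(m) // (math.factorial(n) * math.factorial(m - n))

print(combinations(6, 2))    # 15, as in the worked example
print(math.comb(6, 2))       # built-in, same value

# Combinations differ from arrangements only by ignoring order:
print(combinations(6, 2) == math.perm(6, 2) // math.factorial(2))
```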
1.5. The Binomial of Newton
We know that:

(a + b)^1 = a + b;
(a + b)^2 = a^2 + 2ab + b^2;
(a + b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3;
(a + b)^4 = a^4 + 4a^3 b + 6a^2 b^2 + 4ab^3 + b^4.
The coefficients in these equations are taken from the corresponding rows of Pascal's triangle:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
Let us write down the expression

(a + b)^5 = a^5 + 5a^4 b + 10a^3 b^2 + 10a^2 b^3 + 5ab^4 + b^5,

where the coefficients of the sum are given by one of the rows of Pascal's triangle.
In the general case we can write:

(a + b)^m = C_m^0 a^m + C_m^1 a^(m−1) b + C_m^2 a^(m−2) b^2 + … + C_m^n a^(m−n) b^n + … + C_m^m b^m   (1.4)
Formula (1.4) is known as Newton's binomial formula.
The main properties of the expansion of a power of the binomial:
1. The number of terms equals m + 1, i. e. one more than the degree of the binomial.
2. The exponent of the letter a decreases by one from term to term, and the exponent of the letter b increases by one. The sum of the exponents of a and b in each term equals m.
3. The coefficients of terms equidistant from the beginning and the end of the expansion are equal.
4. The sum of all coefficients of the expansion of a binomial equals 2^m.
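The four properties above can be checked numerically. The sketch below builds the coefficients of (a + b)^m from C_m^n and tests the term count, the symmetry, and the sum 2^m:

```python
import math

def binomial_coefficients(m):
    """Coefficients of (a + b)^m, i.e. one row of Pascal's triangle."""
    return [math.comb(m, n) for n in range(m + 1)]

m = 4
coeffs = binomial_coefficients(m)
print(coeffs)                       # [1, 4, 6, 4, 1]

print(len(coeffs) == m + 1)         # property 1: m + 1 terms
print(coeffs == coeffs[::-1])       # property 3: symmetric coefficients
print(sum(coeffs) == 2 ** m)        # property 4: coefficients sum to 2^m
```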
Exercises
№ 1. Calculate:
…
№ 2. Check the equalities:
…
№ 3. Solve the equations:
…
№ 4. Find the expansions:
…
Lessons 2-3
Probability Theory
2.1. Event. Probability of an Event
Probability theory is the science that studies the regularities in random phenomena. A random phenomenon is a phenomenon that may or may not occur. An event in probability theory is any fact that, as a result of an experiment, can occur or can fail to occur.
Random events can be divided into single events and multiple events.
For example, accidents, unexpected occurrences and the like represent single events, as they are unique. Single events are not considered in probability theory and mathematical statistics.
Multiple events, or phenomena, are the subject of probability theory and mathematical statistics. In probability theory any such event is considered in the context of an experiment.
Events are called dependent when the probability of one event depends on the occurrence of the other events; otherwise they are called independent.
Events are called compatible when the appearance of one of them does not exclude the appearance of another. When one excludes the appearance of the other, the events are called incompatible.
Events are usually denoted by capital letters of the Latin alphabet: A, B, C, …
Let us take some examples of events:
- the occurrence of heads when we toss a coin;
- buying a lottery ticket;
- getting an excellent mark at an examination.
We may now define the probability of an event.
Definition: The probability of event A is the relative size of A with respect to the size of the sample space X.
This definition makes use of our equal-likelihood assumption and so may not be appropriate in cases where this assumption is not reasonable.
Denoting the size of set A by s(A) and the size of the sample space X by s(X), the definition of probability may be written as a formula:

P(A) = s(A) / s(X)   (1)

In cases where the number of elements in the sample space is finite, the size of a set is just the number of elements in the set. In cases where there are infinitely many elements, we may define size as the length of an interval, an area, or a volume, depending on the situation. For finite sample spaces, the definition of probability, equation (1), may be simplified. Let m be the number of elements in set A and n be the number of elements in the sample space X. Then we have:

P(A) = m / n   (2)
We will now demonstrate the use of this equation with a few examples.
Example 1.
There are 3 yellow and 5 red balls in a box. One ball is taken out. Find the probability that this ball is red.
Solution:
In our case n = 3 + 5 = 8 and m = 5; then, using formula (2), we can write:

P = 5/8 = 0.625.
Example 2.
A die is tossed. Find the probability that the number shown is 5.
Solution:
In this case n = 6 and m = 1. Consequently,

P = 1/6.
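Formula (2) reduces probability to counting. A minimal Python sketch using exact fractions reproduces the two examples above:

```python
from fractions import Fraction

def probability(favorable, total):
    """Classical probability P(A) = m / n  (formula 2), as an exact fraction."""
    return Fraction(favorable, total)

# Example 1: 5 red balls among 8 in the box
print(probability(5, 8))   # 5/8

# Example 2: one face of a fair die out of 6
print(probability(1, 6))   # 1/6
```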
2.2. Basic Rules for Probability
We have explored probability on a somewhat intuitive level and have seen rules that help us evaluate probabilities in special cases: when we have a known sample space with equally likely basic outcomes (and the extension of this case to infinite sample spaces). We will now look at some general probability rules that hold regardless of the probability situation or kind of probability (objective or subjective). First, let us give a general definition of probability.
Definition: Probability is a measure of uncertainty. The probability of event A is a numerical measure of our belief in the occurrence of the event.
Probability obeys certain rules. The first rule sets the range of values that the probability measure may take.
Rule 1. For any event A, the probability P(A) satisfies:

0 ≤ P(A) ≤ 1   (3)

When an event cannot occur, its probability is zero (an impossible event). The probability of the empty set is zero: P(Ø) = 0.
Note that probability is a measure that goes from 0 to 1. In everyday conversation we often describe probability in less formal terms. For example, people sometimes talk about odds. If the odds are 1 to 1, the probability is 1/2; if the odds are 1 to 2, the probability is 1/3; and so on. Also, people sometimes say, "The probability is 30 %." We will avoid such informal usages and will always treat probability as a number between 0 and 1.00. Its interpretation should be clear.
Our second rule for probability defines the probability of the complement of an event in terms of the probability of the original event. Recall that the complement of the set A is denoted by Ā. The events A and Ā are called opposite events.

Rule 2. P(Ā) = 1 − P(A)   (4)

As a simple example, if the probability of rain tomorrow is 0.3, then the probability of no rain tomorrow must be 1 − 0.3 = 0.7. If the probability of drawing an ace is 4/52, then the probability of the drawn card not being an ace is 48/52.
We now state a very important rule, the rule of unions. The rule of unions allows us to write the probability of the union of two events in terms of their separate probabilities and the probability of their intersection:

Rule 3. P(A ∪ B) = P(A) + P(B) − P(A ∩ B)   (5)

The probability of the intersection of two events, P(A ∩ B), is called their joint probability. The meaning of this rule is simple and intuitive. When we add the probabilities of A and B, we measure the relative size of A within the sample space and then do the same with B.
Since the relative size, or probability, of the intersection of the two sets is thereby counted twice, we subtract it once so that we are left with the true probability of the union of the two events.
The rule of unions is especially useful when we do not have the sample space for the union of events but do have the separate probabilities. For example, suppose my chance of being offered a certain job is 0.4, my probability of being offered another job is 0.5, and my probability of being offered both jobs (i. e., the intersection) is 0.3. Using the rule of unions, my probability of being offered at least one of the two jobs (their union) equals:

P = 0.4 + 0.5 − 0.3 = 0.6.
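The rule of unions can be verified on a fully enumerated sample space. The sketch below is a hypothetical die example (not from the text): A is "an even outcome" and B is "an outcome of at most 3":

```python
from fractions import Fraction

# Sample space: one roll of a fair die
X = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}        # even outcome
B = {1, 2, 3}        # outcome of at most 3

def P(event):
    """Classical probability: size of the event over size of the sample space."""
    return Fraction(len(event), len(X))

# Rule of unions: P(A U B) = P(A) + P(B) - P(A n B)
lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
print(lhs, rhs)      # both sides agree
```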
Mutually exclusive events.
When the sets corresponding to two events are disjoint, that is, have no intersection, the two events are called mutually exclusive. For mutually exclusive events, the probability of the intersection of the events is zero. This is so because the intersection of the events is the empty set, and we know from the discussion following Rule 1 that the probability of the empty set, P(Ø), is zero.
For mutually exclusive events A and B:

P(A ∩ B) = 0   (5a)

This fact gives us a special version of the rule of unions (Rule 3) for mutually exclusive events. Since the probability of the intersection of the two events is zero, there is no need to subtract P(A ∩ B) when computing the probability of the union of the two events.
Therefore, for mutually exclusive events A and B we can write Rule 4:

Rule 4. P(A ∪ B) = P(A) + P(B)   (6)

If the events happen to be mutually exclusive, we are simply subtracting zero as the probability of the intersection.
3.1. Conditional Probability
As a measure of uncertainty, probability depends on information. Thus, the probability you would give the event "IBM stock price will go up tomorrow" depends on what you know about the company and its performance; the probability is conditional upon your information set. If you know much about the company, you may assign a different probability to the event than you would if you knew little about the company. We may define the probability of event A conditional upon the occurrence of event B. In this example, event A may be the event that the stock will go up tomorrow, and event B may be a favorable quarterly report. The definition of conditional probability is as follows.
Rule 5. The conditional probability of event A given the occurrence of event B:

P(A | B) = P(A ∩ B) / P(B), assuming P(B) ≠ 0.   (7)

Rule 5 may also be written in the form:

P(A ∩ B) = P(B) · P(A | B)   (7a)

The vertical line in P(A | B) is read "given", or "conditional upon". The probability of event A given the occurrence of event B is defined as the probability of the intersection of A and B, divided by the probability of event B.
Example 3.
There are 7 white and 5 black balls in a box. You take one ball and then another. What is the probability that both balls are black?
Solution: The occurrence of the first black ball (event B) evidently has probability

P(B) = 5/12.

If the first ball was black, the conditional probability of event A (the occurrence of the second black ball) equals

P(A | B) = 4/11,

because before the extraction of the second ball 11 balls remained, 4 of them black. The probability of taking out two black balls equals

P(A ∩ B) = P(B) · P(A | B) = (5/12) · (4/11) = 5/33.
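Example 3 can be checked by brute-force enumeration of all ordered draws of two distinct balls, a sketch using the standard library:

```python
import itertools
from fractions import Fraction

# 7 white (W) and 5 black (B) balls; draw two without replacement.
balls = ['W'] * 7 + ['B'] * 5

# All ordered pairs of distinct ball indices: 12 * 11 = 132 equally likely draws
draws = list(itertools.permutations(range(len(balls)), 2))

both_black = [d for d in draws if balls[d[0]] == 'B' and balls[d[1]] == 'B']
p = Fraction(len(both_black), len(draws))
print(p)   # matches (5/12) * (4/11) = 5/33
```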
Two events A and B are said to be independent of each other if and only if the following three conditions hold:

Rule 6. Conditions for the independence of two events A and B:

P(A | B) = P(A)   (8)
P(B | A) = P(B)   (9)
P(A ∩ B) = P(A) · P(B)   (10)

Equations (8) and (9) are intuitively clear: equation (8) says that the probability of A given the occurrence of B is equal to the probability of A without the condition that B occurs. It tells us that event B has no effect on event A, in the sense that its occurrence or nonoccurrence does not affect the chances of occurrence of event A; event B has no information content that would help us predict event A. Equation (9) says the same thing about the probability of event B, with A as the conditioning event.
Equation (10) is the most useful of the three conditions. It is useful because it tells us that in the special case that two events are independent, the probability of their intersection (their joint probability) is equal to the product of the two separate probabilities.
Example 4.
A nurse serves a ward of four patients. The probability that within one hour the first patient will demand the attention of the nurse equals p1, the second p2, the third p3, and the fourth p4. Find the probability that within one hour all the patients will demand the attention of the nurse.
Solution:
Denote the corresponding events A, B, C, and D; they are independent. Applying formula (10) repeatedly, we can write:

P(A ∩ B ∩ C ∩ D) = p1 · p2 · p3 · p4.
3.2. The Law of Total Probability and Bayes Theorem
In this section we present two useful results of probability theory. The first one, the law of total probability, allows us at times to evaluate probabilities of events that are difficult to obtain alone but become easy to calculate once we condition on the occurrence of a related event. We first assume that the related event occurs, and then we assume it does not occur. The resulting conditional probabilities help us compute the total probability of occurrence of the event of interest.
The second rule, Bayes' theorem, is easily derived from the law of total probability and the definition of conditional probability. The rule, discovered in 1761 by the English clergyman Rev. Thomas Bayes, has had a profound impact on the development of statistics and is responsible for the emergence of a new philosophy of science. Bayes himself is said to have been unsure of his extraordinary result, which was presented to the Royal Society by a friend in 1763, after Bayes's death.
The law of total probability.
Consider two events A and B. Whatever the relation between the two events may be, we can always say that the probability of A is equal to the probability of the intersection of A and B, plus the probability of the intersection of A and the complement of B (the event B̄).

Rule 7. The law of total probability:

P(A) = P(A ∩ B) + P(A ∩ B̄)   (11)

The sets B and B̄ form a partition of the sample space. A partition of a space is the division of the space into a set of events that are mutually exclusive (disjoint sets) and cover the whole space. Whatever event B may be, either B or B̄ must occur, but not both. The law of total probability may be extended to more complex situations, where the sample space X is partitioned into more than two events. Suppose we partition the space into a collection of n sets B1, B2, …, Bn. The law of total probability in this situation is given as equation (12):

P(A) = P(A ∩ B1) + P(A ∩ B2) + … + P(A ∩ Bn)   (12)

where the n sets B1, B2, …, Bn of the partition are mutually exclusive and together cover the whole sample space. By formula (7a), each term may also be written as P(A ∩ Bi) = P(A | Bi) · P(Bi).
We demonstrate the rule with a more specific example.
Define A as the event that a picture card is drawn out of a deck of 52 cards (here the picture cards are the aces, kings, queens, and jacks). Letting H, C, D, and S denote the events that the card drawn is a heart, club, diamond, or spade, respectively, we find that the probability of a picture card is

P(A) = P(A ∩ H) + P(A ∩ C) + P(A ∩ D) + P(A ∩ S) = 4/52 + 4/52 + 4/52 + 4/52 = 16/52,

which is what we know the probability of a picture card to be by counting 16 picture cards out of the total of 52 cards in the deck. Thus we see that event A is the union of the intersections of A with each of the four sets H, C, D, and S.
We now develop the well-known Bayes' theorem. The theorem allows us to reverse the conditionality of events: we can obtain the probability of B given A from the probability of A given B (and other information).
The theorem gives the probability of one of the sets in the partition, say B1, given the occurrence of event A. A similar expression holds for any of the events B2, …, Bn.

Rule 8. Bayes' theorem:

P(B1 | A) = P(A | B1) · P(B1) / [P(A | B1) · P(B1) + P(A | B2) · P(B2) + … + P(A | Bn) · P(Bn)]   (13)
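Formulas (12) and (13) can be sketched numerically. The prior and conditional probabilities below are made-up illustration values, not from the text; the point is only that the total probability comes from summing over the partition and that Bayes' theorem divides one term of that sum by the total:

```python
from fractions import Fraction

# Hypothetical partition B1, B2, B3 with prior probabilities P(Bi) (made-up)
priors = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]
# Hypothetical conditional probabilities P(A | Bi) (made-up)
likelihoods = [Fraction(1, 10), Fraction(1, 5), Fraction(1, 2)]

# Law of total probability (formula 12): P(A) = sum of P(A | Bi) * P(Bi)
p_A = sum(l * p for l, p in zip(likelihoods, priors))

# Bayes' theorem (formula 13): P(B1 | A) = P(A | B1) * P(B1) / P(A)
posterior_B1 = likelihoods[0] * priors[0] / p_A

print(p_A, posterior_B1)
```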
Exercises
1. A die is tossed. Find the probability of an even number.
2. Among 1000 newborns there were 517 boys. Find the probability of the birth of a girl.
3. The digits "1", "2", "3" are placed in random order to form a three-digit number. Find the probability that the resulting number is even.
4. There are 50 parts in a box, 5 of them painted. One part is taken out. Find the probability that this part is not painted.
5. There are 30 balls in a box: 10 red, 5 blue, 15 white. One ball is taken out. Find the probability of the occurrence of: a) a blue ball; b) a red ball; c) a black ball; d) a white ball.
6. The department receives tests from three cities: N, M, K. The probability of receiving a test from city N equals 0.7, from city M 0.2. Find the probability p of receiving a test from city K.
7. The probabilities of hitting the target by the first and the second gun equal 0.7 and 0.8, respectively. Find the probability that the target is hit by at least one of them.
8. There are 5 green, 3 red and 4 black pencils in a box. One pencil is taken out, and then another. Find the probability that the first pencil is green and the second one is black.
9. The doctor has 3 syringes of 5 ml and 7 syringes of 10 ml. The doctor takes one syringe, then another. Find the probability that the first syringe is a 5 ml one and the second a 10 ml one.
10. Find the probability of two heads appearing when two coins are tossed.
11. There are two sets of parts. The probability that a part of the first set is standard equals 0.8, and of the second, 0.9. Find the probability that a part taken at random is standard.
12. There are three baskets of apples. In the first basket there are 10 red, 15 yellow and 5 green apples; in the second basket, 15 red and 15 green apples; the third basket is empty. Find the probability that an apple taken at random is red (yellow, green).
Lessons 4-5
Random Variables, Their Basic Characteristics
4.1. Random Variables
In probability theory and mathematical statistics we must know what a random variable is, because this is one of the most important definitions of this branch of mathematics.
A variable that, as a result of an experiment, can take various numerical values is called a random variable. There are two main types of random variables: discrete variables and continuous variables.
To specify a random variable uniquely, probability theory introduces the concept of the distribution function. This function is denoted F(x) and is defined by the following formula:

F(x) = P(X < x)   (1)