Many types of math processing in the brain, located in different areas
This is an intuition that can be tested rather straightforwardly, by observing the areas in the brain that fire up when solving a math problem. My hypothesis is that saying "I'm a nullity in math" is far from the truth.
There are different kinds of math and of doing math.
You may feel like a nullity in one kind of math and be brilliant in another kind.
What is important is that we expose kids and students to various ways of thinking mathematically, so that they don't dump all of math in the waste bin, believing that mathematical thinking is out of reach for their brain.
I'll discuss just two kinds of math processes: the algorithmic kind and the abstract kind of groups, spaces…
1) The algorithmic type of math is what is involved in basic arithmetic and the manipulation of numbers, like multiplication, division… These rules were discovered initially by trial and error, and the procedures solved practical daily-life problems before taking on higher dimensions. Theorems are actually shortcuts that bypass lengthy phases in the procedures.
Various cultures have devised different kinds of algorithms for these purposes. For example, the Chinese have a different way of memorizing numbers, and of thinking about and doing basic arithmetic.
Actually, coding falls mainly within the algorithmic type: following stages and conditions to resolve a difficulty. Sub-programs are the equivalent of theorems that shortcut exhaustive procedures.
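To make the sub-program/theorem analogy concrete, here is a minimal Python sketch (my own illustration, not from the original argument): the exhaustive procedure checks every candidate divisor one by one, while a reusable helper based on Euclid's algorithm plays the role of a theorem that shortcuts the lengthy procedure.

```python
# Exhaustive "procedure": try every candidate divisor from min(a, b) down to 1.
def gcd_exhaustive(a, b):
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d
    return 1

# Reusable sub-program playing the role of a "theorem":
# Euclid's algorithm shortcuts the exhaustive search.
def gcd_euclid(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd_exhaustive(1071, 462))  # 21
print(gcd_euclid(1071, 462))      # 21, but in far fewer steps
```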
2) The abstract type of math involves setting rules, axioms, conditions, and limitations for thinking out a problem, whether it has any application or not. Differential equations, partial differentials, and integrals were developed to account for physical experimental data and then acquired a life of their own in other disciplines.
An experiment can consider two groups of kids who are familiar with the two kinds of thinking math: algorithmic and abstract.
One group will be handed a set of algorithmic exercises while the experimenter prompts them that they are solving abstract math, and vice versa for the second group.
I predict that the researcher's prompting, or pre-emptive guidance, will slow down the solving of the problems, lengthening the normal or average duration.
People have confidence in accredited authority, and the kids will first activate the brain region the researcher guided them toward.
Usually, routine solving takes over, and I suspect the first group will recover from the erroneous prompting faster than the second group.
A major factor to seriously control in these experiments is the kind of math the kids were exposed to and trained in at home, before attending school and learning variants of thinking and doing math.
The teaching methods and the credibility of the teacher are other important factors to control in order not to generate confounding results.
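As a rough illustration only, the toy simulation below encodes the prediction above with entirely made-up numbers; the base solving time, prompt penalty, and recovery rates are assumptions, not data. It simply shows the pattern I expect: the group handed algorithmic exercises recovers more from the misleading prompt than the group handed abstract exercises.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30                       # kids per group (made-up)
base = 60.0                  # average solving time in seconds (made-up)
penalty = 15.0               # assumed slowdown caused by the misleading prompt
recovery_algorithmic = 0.6   # assumption: routine solving lets group 1 recover more
recovery_abstract = 0.3

# Group 1: algorithmic exercises prompted as "abstract"; group 2: the reverse.
g1 = base + penalty * (1 - recovery_algorithmic) + rng.normal(0, 5, n)
g2 = base + penalty * (1 - recovery_abstract) + rng.normal(0, 5, n)
control = base + rng.normal(0, 5, n)   # no misleading prompt

print("control mean:", control.mean().round(1))
print("group 1 mean:", g1.mean().round(1))   # slowed, but recovers more
print("group 2 mean:", g2.mean().round(1))   # slowed the most, per the prediction
```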
Placebo is neutral and inexpensive? Think again!
Placebos are supposed to have neutral effects in double-blind experiments on the effects of medicines, and they are thought to be very inexpensive products.
First, did you know that 80% of published peer-reviewed clinical research failed to describe the ingredients and components of the placebo used in the experiments? The fact is, placebos are not mere sugar, plain water, or saline solutions… They contain ingredients "considered to be safe or neutral" by the researcher. Remember the case of olive oil used as a placebo while cod-liver oil was the medicine under test? They both lowered cholesterol levels!
Second, placebos are not cheap! Placebos are usually more expensive than the actual manufactured medicine being tested. The placebo has to exactly resemble the medicine in form, shape, color, consistency, and taste, and be credible in its logo and inscriptions… The pharmaceutical manufacturer has to redesign a new product in small quantities; thus, placebos are far more expensive than normally budgeted in the research grant.
Actually, a new field is emerging for graphic designers, called "placebo designers", with the objective of creating credible and convincing placebos.
Third, the autosuggestion that the placebo is the proper medicine has been demonstrated to be a potent factor in the cure of many patients. For example, in 17% of cases where patients were informed that they were taking a placebo, it still had a positive influence. Fabrizio Benedetti used a saline solution on Parkinson's patients: the activity in the corresponding cerebral region diminished significantly and the trembling ceased.
Did you know that between 2001 and 2006, the number of "fake medicines" on the market that never reached phase 2 of testing (a limited number of patients experimented on) increased by 20%? And that marketed fake medicines that didn't pass phase 3 increased by 11%?
Did you know that the Canadian Journal of Psychiatry revealed in April 2011 that 20% of medical practitioners administered placebos to their patients without the patients' knowledge? And that 35% of the prescribed medicines had weak doses of potent ingredients?
Using placebos with chronic patients, who are familiar with the taste and consistency of the real medicine, generates negative counter-reactions in their minds, called the "nocebo" effect: chronic patients are no fools and can distinguish a placebo from their normal medicine, since they are used to taking particular medicines regularly.
The role of autosuggestion is very important in curing patients. Consequently, unless the clinical experimenter is thoroughly aware of the types of illnesses that can be cured by autosuggestion and factors this variable into the control of the experiment, the results will be confounded: further investigation, analysis, or redoing the experiment with a revised design would be required.
Note: Idea extracted from an article in the “Courrier International”
“Fundamentals of controlled experimentation methods” (Article 39, April 1st, 2006)
An experiment is designed to study the behavior of the values/responses of a dependent variable (for example, the data collected) as the values/stimuli of an independent variable/factor are changed, manipulated, or presented randomly or in a fixed manner.
Besides the independent variables, there are other factors that need to be controlled because they could have serious effects on the behavior of the selected dependent variable. If the researcher fails to hold these factors constant through appropriate techniques, procedures, instructions, experimental settings, and environmental conditions, the study will most likely produce confounding results.
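A minimal sketch of what "failing to hold a factor constant" can do, with invented numbers: when a lurking factor (here, a hypothetical "prior home training" variable) is allowed to differ between groups, the naive comparison suggests an effect of the independent variable that disappears once assignment is randomized.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical example: independent variable = teaching method (0 or 1),
# dependent variable = test score, uncontrolled factor = prior home training.
home_training = rng.normal(0, 1, n)                               # lurking factor
method = (home_training + rng.normal(0, 1, n) > 0).astype(float)  # groups not balanced
score = 5.0 * home_training + rng.normal(0, 1, n)                 # method has NO true effect

# Naive comparison: looks like the method matters (a confounded result).
print(score[method == 1].mean() - score[method == 0].mean())

# Controlled comparison: randomize assignment so the lurking factor is balanced.
method_rand = rng.integers(0, 2, n).astype(float)
print(score[method_rand == 1].mean() - score[method_rand == 0].mean())  # close to 0
```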
Controlled experimentation methods are versions of current simulation methods, but are essentially more structured and physically controlled. In a nutshell, an experimental method is a series of controlled observations undertaken in an artificial situation, with the deliberate manipulation of variables, in order to answer specific hypotheses.
In general, a scientist plans, controls, and describes all the circumstances surrounding his tests in a way that they can be repeated by anyone else, a condition that offers dependability for validation.
The requisite of repeatability encourages artificial settings that can be controlled, especially because:
1. The participants/subjects in the experiment are not usually involved or engrossed in their tasks, and
2. it enables a scientist to try combinations of conditions that have not yet occurred.
Controlled experiments are time-consuming, expensive, and require a staff of skilled researchers and investigators, so they are mostly conducted for basic research, for publishing scientific papers, and when sponsored by deep-pocketed private companies or well-funded public institutions.
There are different types of experiments: some are designed to extract cause and effect among the variables, and especially their interactions, in the performance of a system; others are not so well structured and are intended to explore a phenomenon at an initial phase in order to comprehend the subject matter…
Experiments vary in their design purposes and levels of control: there are experiments on inanimate objects and natural phenomena that follow fixed trends and do not change much with time, experiments using human subjects to select the better-performing system or product, and experiments intended to study people's cognitive concepts such as attitudes, mental abilities, problem-solving aptitudes, attention span, and the like.
The next article, entitled "Controlled experimentation: natural sciences versus people's behavior sciences", is intended to compare the complexity, differences, and levels of difficulty among the various experiments.
This article strives to establish the fundamental processes, or necessary structured steps, for conducting a controlled experiment. On the spectrum of complexity, innovation, and difficulty, experiments in the natural sciences are the easiest and psychology experiments the hardest. Within human-targeted research fall experiments in the disciplines of agriculture, econometrics, education, the social sciences, and marketing.
Early researchers into the phenomenon of electricity had to experiment with simple methods of one dependent and one independent variable and rudimentary equipment, and to rely on exploratory knowledge of how electricity works and which factors cause definite changes in the behavior of certain criteria.
For example, scientists observed that there are relationships among the voltage, the intensity of the current, and the material the current flows through. A scientist could then set up an experiment to study how the voltage changes when the intensity of the current varies or when the resistance of the material changes. By conducting several experiments, first working with a specific conducting material (thus fixing the resistance) and varying the intensity of the current, repeating this simple experiment many times, and second fixing the current at a certain level and working with different kinds of conducting materials, the scientist managed to observe a steady mathematical relationship among these three variables. As the body of knowledge in electricity expanded and more experiments were undertaken, the physical science of electricity discovered many more factors that enter into the mathematical relationship, with varying degrees of importance and consequence.
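The fixed-resistance version of that experiment can be sketched as follows, with made-up measurements: varying the current and fitting the steady relationship V = R·I by least squares recovers an estimate of the conductor's resistance.

```python
import numpy as np

# Hypothetical data for a fixed conductor: vary the current I, measure the voltage V.
I = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # amperes
V = np.array([1.1, 2.0, 3.1, 3.9, 5.0])   # volts, with measurement noise

# Fit the steady relationship V = R * I by least squares; the slope estimates R.
R_est, *_ = np.linalg.lstsq(I.reshape(-1, 1), V, rcond=None)
print(f"estimated resistance is approximately {R_est[0]:.2f} ohms")
```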
Obviously, physical scientists can now enjoy more powerful, time-saving, and effective experimental designs that employ several independent variables and several dependent variables in the same experiment, thanks to developments in statistical/mathematical modeling and number-crunching computers.
These developments in controlled experimentation allow the interactions among the various variables to be observed simultaneously, if physical scientists deign to apply them!
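As a small sketch of such a multi-variable design (hypothetical data and effect sizes of my own choosing), a 2×2 factorial layout fitted with a linear model that includes a product term lets the interaction between the two independent variables be estimated directly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2x2 factorial design: two independent variables x1 and x2,
# one measured dependent variable y, with an interaction between the factors.
x1 = np.repeat([0, 0, 1, 1], 25)
x2 = np.tile([0, 1, 0, 1], 25)
y = 2.0 + 1.5 * x1 + 0.5 * x2 + 3.0 * x1 * x2 + rng.normal(0, 1, 100)

# Fit y = b0 + b1*x1 + b2*x2 + b3*(x1*x2); b3 estimates the interaction effect.
X = np.column_stack([np.ones_like(y), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated interaction effect:", round(coef[3], 2))   # close to 3.0
```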
Controlled experimentation methods have a set of requisite structured steps that are common to both natural and social studies. Usually, an investigator has to review the research papers on the topic to be investigated, sort out the articles that are scientifically valid and experimentally sound, and consider the variables that have been satisfactorily examined, those that were merely controlled, and those not even considered…
Or the researcher may explore the topic by systematic observation of the problem. He then has to propose a hypothesis that can be rejected but never accepted, no matter how often it fails to be rejected; conceive a design for the experiment, such as the types and numbers of variables, their levels, and how the trials are to be manipulated; decide on the best method for selecting the subjects, the materials or products to be tested, the setting conditions, the procedures, the operations or tasks to be performed, the instructions, the equipment, and the appropriate statistical model; and then conduct the experiment, run the data, analyze and interpret the results, and finally provide guidelines or practical suggestions to be applied in engineering projects.
The engines of the statistical packages used to analyze data are mathematical models, or sets of algebraic equations with as many equations as unknown variables, relying on the two main statistical concepts of means and variances among the data.
The purpose of controlled experimentation methods is to strictly control systematic errors due to biases, and then to sort out the errors that are due to differences among the independent variables from those introduced randomly by human variability. Once the size of the random errors is accounted for, it is possible to study the relationships among the independent variables and to claim that a hypothesis could or could not be rejected at a criterion level of statistical significance, frequently set at 5%. This 5% criterion level means that there is still a 5% chance that random error is the cause of the differences in the results.
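A minimal sketch of that 5% criterion in practice, using invented data for two conditions and the means-and-variances machinery mentioned above (Welch's two-sample t-test here, as one common choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical two-condition experiment: did the independent variable shift the mean?
group_a = rng.normal(100, 10, 40)   # control condition
group_b = rng.normal(106, 10, 40)   # treatment condition (made-up true shift)

# The test relies on the means and variances of the two samples.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

alpha = 0.05                        # the usual 5% criterion level
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject the null hypothesis")
```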
Types of errors and mistakes committed in controlled experimentation will be reviewed in article #45. However, it is important to differentiate between evaluation/testing methods and strictly controlled experimentation.
In the human factors discipline, evaluation methods are applied to compare the effectiveness of several products or systems by measuring end-users' behaviors, likes/dislikes, acceptance/rejection, or compliance with rules and regulations, so that management can decide among the products offered within specifications.
Controlled experimental methods are mainly applied to study the cause and effect of the main factors on objective measurements that represent valid behaviors of representative samples of end-users, with the purpose of reaching design guidelines for products or systems planned for production.