4.d) Social and technical issues
4.d.1) Shortage of statistical studies with adequate IQ data
Statistical studies of intelligence quotients in families with large samples are relatively scarce, because fieldwork involving these types of variables tends to focus on studies of identical twins, fraternal twins, or adoption programs, and does not include both progenitors.
In any case, if another such study exists, I would like to have access to its raw data.
Such studies are also quite costly if the results are to have a guarantee of objectivity: a good sample design is needed, and there is the additional difficulty of recruiting volunteers for this type of statistical study, of having the intelligence tests administered by specialized personnel, etc.
4.d.2) Access to quantitative data source
Nevertheless, the most complicated thing for me was finding and gaining access to an original quantitative data source so that I could perform my own statistical research and estimations.
In spite of the small size of the analyzed sample, generating variables from the available values by different groupings and criteria has made it possible to build a model that is very sensitive to the information. This characteristic is, in my opinion, one of the model's strong points: despite the vulnerability of the correlations under study, it is significant that some coefficients of determination close to unity have been obtained, and that the thousands of checks carried out show a high level of consistency.
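The effect described above, that grouping the available values by different criteria can push coefficients of determination toward unity, can be illustrated with a small, purely hypothetical Python sketch (none of the EDI variables or data are used here; the point is only that averaging observations within groups damps individual noise, so a regression on group means fits much more tightly):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical individual-level data: a noisy linear relationship.
n = 200
x = rng.normal(size=n)
y = x + rng.normal(scale=1.0, size=n)

def r2(a, b):
    """Coefficient of determination of a simple OLS fit,
    i.e. the squared Pearson correlation for one regressor."""
    r = np.corrcoef(a, b)[0, 1]
    return r * r

r2_raw = r2(x, y)

# Group by a criterion (here, sorted x) and average in blocks of 10:
# the noise in each group mean shrinks, so R^2 rises sharply.
order = np.argsort(x)
gx = x[order].reshape(20, 10).mean(axis=1)
gy = y[order].reshape(20, 10).mean(axis=1)
r2_grouped = r2(gx, gy)
```

The price of this sensitivity, as the text notes, is vulnerability: the grouped variables rest on far fewer effective observations than the raw sample.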
In defense of my small sample of quantitative data, I would like to say that I spent more than a year virtually visiting numerous professional circles, national and international organizations dedicated to the study of intelligence, public bodies, universities, Internet newsgroups, international twin studies, etc., asking for statistical data on the intelligence quotients (IQ) of families. I even handed the search over to a psychometric company, but with no results.
In the end, a search performed through Google's paid service found four different sites for me. I visited three of them without finding the information. Fortunately, the fourth bore fruit, although with a certain amount of difficulty: I at last obtained a sample of quantitative data, even though it was small.
I suppose that the personal character of these statistical data, and their social and political implications, hinder access to them.
Likewise, I imagine that the authors of the majority of the thousands of statistical research articles published on this subject probably did not have access to the original quantitative data, and limited themselves to commenting on the results published in other works and to theoretical justifications of their personal points of view.
4.d.3) Modern computer technology
Carrying out statistical research with sensitivity analysis of the multiple correlation coefficients obtained by linear regression requires a solid knowledge of statistical techniques.
Correlation analysis of variables and regression models by ordinary least squares have been easy to compute for quite some time.
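As a reminder of what those computations involve, a minimal ordinary least squares fit with its coefficient of determination can be written in a few lines of Python (a generic illustration with made-up numbers, not the EDI model or its data):

```python
import numpy as np

def ols_r2(x, y):
    """Fit y = a + b*x by ordinary least squares and return (a, b, R^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)            # slope b and intercept a
    y_hat = a + b * x                     # fitted values
    ss_res = np.sum((y - y_hat) ** 2)     # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
    return a, b, 1.0 - ss_res / ss_tot

# Example: a nearly linear relationship yields an R^2 close to unity.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b, r2 = ols_r2(x, y)
```

The difficulty discussed in this section is not this single fit, but repeating it over an enormous number of variable combinations and model variants.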
Nonetheless, the calculation capacity of computers has multiplied spectacularly in recent years. This great capacity of modern computer technology has been absolutely necessary to obtain the results achieved in the statistical research of the EDI Study.
In this regard, it is worth pointing out that each time the quantitative data is updated, the Excel worksheet generates more than 10,000 random numbers, hundreds of variables, and more than 100,000 coefficients of determination by ordinary least squares linear regression for the different variants of the model, and presents me with some 200 graphs containing 16,000 values, in color of course.
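The bulk nature of that recalculation, one coefficient of determination per model variant over freshly generated random numbers, can be imitated on a much smaller scale in Python (the counts and variables here are hypothetical; the actual Excel worksheet is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs = 70          # observations per variant (hypothetical)
n_variants = 1000   # number of model variants to test (hypothetical)

r2_values = np.empty(n_variants)
for i in range(n_variants):
    # Draw an explanatory variable and a noisy linear response.
    x = rng.normal(size=n_obs)
    y = 2.0 * x + rng.normal(scale=0.5, size=n_obs)
    # For a single regressor, R^2 equals the squared Pearson correlation.
    r = np.corrcoef(x, y)[0, 1]
    r2_values[i] = r * r
```

Scanning a whole array of R² values like this, rather than inspecting one regression at a time, is what makes a sensitivity analysis over thousands of model variants feasible.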
In total, more than 500 million correlation coefficients relating to the experimental research data have been analyzed.
Each full recalculation takes only 3 to 7 seconds. It is worth mentioning that, given the enormous quantities of data involved, errors in the formulas, which always occur, can only be detected if one has an intuitive idea of what the result should be; that intuition is what allows any errors to be uncovered.