The very nature of elites makes them difficult for social researchers to study. This volume provides valuable insights into how researchers can successfully gain access to elite settings. Drawing on their own experiences, the contributors provide constructive advice, as well as cautionary tales, about how they learned to manoeuvre in, and become accepted by, worlds otherwise closed to them. Three broad research areas are covered: business elites; professional elites; and community and political elites. Useful guidance is given on how researchers in these areas can gather data, construct interview strategies, write about their subjects and reflect on the research process.
Interviews were once thought to be the pipeline through which information was transmitted from a passive subject to an omniscient researcher. However, the new 'active interview' treats interviewers and interviewees as equal partners in constructing meaning around an interview. This interpretation changes a range of elements in the interview process - from the way a sample is conceived to the ways in which the interview may be conducted and the results analyzed. In this guide, the authors outline the differences between active interviews and traditional interviews and give novice researchers clear guidelines on conducting a successful interview.
This text aims to help researchers and students understand the purpose and presentation of multivariate statistical techniques. The most commonly used techniques, such as multiple regression, correlation and path analysis, are described in detail.
Written by qualitative researchers for qualitative researchers, and not presuming extensive computer experience, this user-friendly guide takes a critical look at the wide range of software currently available. The book gives detailed reviews of 24 programs in five major categories: text retrievers, textbase managers, code-and-retrieve programs, code-based theory-builders and conceptual network-builders. In addition, the book provides ratings of over 75 features per program. The authors also offer detailed guidance on the operation of each program, helping the reader to ask key questions about the use of the computer - the nature of the project being undertaken, what time-line analyses are planned and what worksheets are required - all to assist in identifying the programs best suited to the reader's needs.
Panel data - information gathered from the same individuals or units at several different points in time - are commonly used in the social sciences to test theories of individual and social change. This book highlights the developments in this technique in a range of disciplines and analytic traditions. Providing an overview of models appropriate for the analysis of panel data, the book focuses specifically on the area where panels offer major advantages over cross-sectional research designs: the analysis of causal interrelationships among variables. Finkel demonstrates how panel data offer multiple ways of strengthening the causal inference process. He also explores how to estimate models that contain a variety of lag specifications, reciprocal effects and imperfectly measured variables.
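As a minimal sketch of the kind of lagged model the blurb describes (the data and coefficients here are simulated for illustration, not taken from Finkel's book): in a two-wave panel, regressing the wave-2 outcome on its own wave-1 lag plus a wave-1 predictor exploits the temporal ordering that cross-sectional designs lack.

```python
import numpy as np

# Simulated two-wave panel: Y at wave 2 depends on its own lag (Y1)
# and on a predictor X measured at wave 1. Values are illustrative.
rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(0, 1, n)                         # X at wave 1
y1 = rng.normal(0, 1, n)                         # Y at wave 1
y2 = 0.6 * y1 + 0.3 * x1 + rng.normal(0, 1, n)   # Y at wave 2

# Lagged-dependent-variable regression by ordinary least squares:
# the lag term controls for prior levels of the outcome.
X = np.column_stack([np.ones(n), y1, x1])
beta, *_ = np.linalg.lstsq(X, y2, rcond=None)
print(beta)  # estimates near the true values (0, 0.6, 0.3)
```

With more waves, the same design extends to reciprocal effects by also regressing X at wave 2 on lagged Y.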
This accessible introduction to data analysis focuses on the interpretation of statistical results, in particular those which come from nonexperimental social research. It will provide social science researchers with the tools necessary to select and evaluate statistical tests appropriate for their research question. Using a consistent data-set throughout the book to illustrate the various analytic techniques, Michael Lewis-Beck covers topics such as: univariate statistics; measures of association; the statistical significance of the relationship between two variables; simple regression in which the dependent variable is influenced by a single independent variable; and multiple regression.
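The contrast between simple and multiple regression that the blurb mentions can be sketched numerically (the variables and coefficients below are invented for illustration and are not Lewis-Beck's data-set):

```python
import numpy as np

# Hypothetical data: an outcome y driven by two independent variables.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(13, 2, n)                        # e.g. years of education
x2 = rng.normal(10, 4, n)                        # e.g. years of experience
y = 5 + 2.0 * x1 + 0.5 * x2 + rng.normal(0, 1, n)

# Simple regression: one independent variable only.
X_simple = np.column_stack([np.ones(n), x1])
b_simple, *_ = np.linalg.lstsq(X_simple, y, rcond=None)

# Multiple regression: both independent variables at once.
X_multi = np.column_stack([np.ones(n), x1, x2])
b_multi, *_ = np.linalg.lstsq(X_multi, y, rcond=None)

print("simple:  ", b_simple)   # slope on x1 only
print("multiple:", b_multi)    # estimates near the true (5, 2.0, 0.5)
```

The multiple-regression fit recovers both slopes jointly, whereas the simple regression attributes all systematic variation to the single included variable.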
How can social scientists assess the reliability of the measures derived from tests and questionnaires? Through an illustrative review of the principles of classical reliability theory, Ross E. Traub explores some general strategies for improving measurement procedures. Beginning with a presentation of random variables and the expected value of a random variable, the book covers such topics as: the definition of reliability as a coefficient and possible uses of a coefficient; the notion of parallel tests, which makes it possible to estimate a reliability coefficient for a set of measurements; what to do when parallel tests are not available; what factors affect the reliability coefficient; and how to estimate the standard error of measurement. Aimed at giving readers a nontechnical treatment of classical reliability theory, the book also includes end-of-chapter exercises as well as boxes that give more in-depth coverage of major topics or that provide algebraic proofs.
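The parallel-tests idea can be illustrated with simulated scores (a sketch in standard classical-test-theory notation, not Traub's own example): the correlation between two parallel forms estimates the reliability coefficient, and the standard error of measurement follows from it.

```python
import numpy as np

# Classical test theory: observed score = true score + random error.
# Two parallel forms share the true score but have independent errors.
rng = np.random.default_rng(1)
n = 1000
true_score = rng.normal(50, 10, n)                 # latent true scores
form_a = true_score + rng.normal(0, 5, n)          # parallel form A
form_b = true_score + rng.normal(0, 5, n)          # parallel form B

# Reliability coefficient = correlation between parallel forms.
# Theoretically var(T)/var(X) = 100 / (100 + 25) = 0.8 here.
r_xx = np.corrcoef(form_a, form_b)[0, 1]

# Standard error of measurement: SD(X) * sqrt(1 - reliability).
sem = form_a.std(ddof=1) * np.sqrt(1 - r_xx)

print(round(r_xx, 2), round(sem, 2))   # roughly 0.8 and 5
```

When parallel forms are unavailable, internal-consistency estimates from a single administration play an analogous role.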
In this volume, Clark Moustakas clearly discusses the theoretical underpinnings of phenomenology, based on the work of Husserl and others, and takes the reader step-by-step through the process of conducting a phenomenological study.
Which time series test should researchers choose to best describe the interactions among a set of time series variables? Providing guidelines for identifying the appropriate multivariate time series model to use, this book explores the nature and application of these increasingly complex tests. In addition, it covers such topics as: joint stationarity; testing for cointegration; testing for causality; and model order and forecast accuracy. Related models explained include transfer function, vector autoregression and error correction models.
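A vector autoregression, one of the related models the blurb names, can be sketched in a few lines (the coefficient matrix below is invented for illustration): each series is regressed on the lagged values of all series in the system.

```python
import numpy as np

# Simulate a stationary bivariate VAR(1): y_t = A @ y_{t-1} + noise.
rng = np.random.default_rng(3)
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])                       # true coefficient matrix
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

# Fit the VAR(1) by OLS: regress current values on a constant plus lags.
Y = y[1:]                                        # current observations
Z = np.column_stack([np.ones(T - 1), y[:-1]])    # constant + lagged values
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
A_hat = B[1:].T                                  # recovered coefficient matrix
print(np.round(A_hat, 2))                        # close to the true A
```

Granger-causality tests then amount to asking whether the off-diagonal lag coefficients are jointly zero, and error correction models add a cointegration term to this lag structure.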