at the Annual Meeting of the Florida Chapter of the American Statistical Association
Bayesian Methods for Case-Control Studies
Malay Ghosh, University of Florida
Case-control studies are useful for studying the risk factors associated with a rare disease, for example, cancer. The usual approach begins by identifying a number of known diseased individuals, referred to as "cases". One is typically interested in measuring the association of the disease with a specified risk factor, for example smoking or dietary habits in the case of cancer. A typical approach is to take a cohort and match the cases within this cohort with a sample of "controls" from the same cohort who appear to be unaffected by the disease. The controls are usually chosen to match the cases in terms of age, sex, and other auxiliary characteristics. The statistical problem is then to compare the rates of the risk factor among the cases with those among the controls. If the risk factor appears at a significantly higher rate in the cases than in the controls, one is led to believe that the risk factor is positively associated with the disease. In the first part of my talk, I will give a brief history and discuss some of the frequentist methods currently in use. The second part of my talk will discuss some of the Bayesian methods that I have just started working on.
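The comparison of risk-factor rates between cases and controls is conventionally summarized by the odds ratio from the resulting 2x2 table; a minimal sketch with hypothetical counts (not data from the talk):

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Sample odds ratio from a 2x2 case-control table."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts: 40 of 100 cases exposed, 20 of 100 controls exposed.
or_hat = odds_ratio(40, 60, 20, 80)
print(or_hat)  # 2.666... : exposure odds are higher among cases
```

An odds ratio well above 1 suggests a positive association between the risk factor and the disease, which is the quantity frequentist and Bayesian case-control analyses alike aim to estimate.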
Computer Vision as Statistics?
Ulf Grenander, Brown University
It will be argued that Statistics is an indispensable tool for the pattern-theoretic approach to Computer Vision. The patterns are described by knowledge representations in algebraic form, and the data are analyzed by Bayesian inference. The algorithms are organized as stochastic jump/diffusions (MCMC) and realized by solving the corresponding Langevin equations. In object recognition, a fundamental difficulty is how to understand and represent clutter. A pattern-theoretic representation of a clutter type will be discussed and treated analytically with reference to empirically obtained images.
Contract Warranties and Equilibrium Probabilities
Nozer Singpurwalla, George Washington University
In this talk we describe an actual situation involving litigation for a product under warranty. Warranty contracts are crafted by lawyers without consideration of the underlying failure probabilities. We propose to describe the various types of probabilities that are inherent in a warranty contract.
Nonparametric Control Charts
Raid W. Amin, University of West Florida
The proposed procedures are based on sign test statistics or on score functions computed for each sample, and are used in Shewhart and CUSUM control charts. When the process is in control, the run length distributions of the proposed nonparametric control charts do not depend on the distribution of the observations. Comparisons with the corresponding parametric control charts are presented. Curtailed sampling plans are utilized with nonparametric charts for variability, and a new CUSUM chart is proposed for curtailed sampling plans. It is shown that curtailed sampling plans can considerably reduce the expected number of observations used in Shewhart and CUSUM sign charts. A robustness study indicates that quantiles can be estimated more reliably than control limits for S-squared charts.
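To illustrate the distribution-free property, the sign statistic plotted on a Shewhart-type chart is Binomial(n, 1/2) whenever the process median equals its in-control target, so control limits follow from binomial quantiles regardless of the observation distribution. A minimal sketch (the target median and false-alarm rate are illustrative, not the talk's settings):

```python
import random
from math import comb

def sign_statistic(sample, target_median):
    """Number of observations above the in-control target median."""
    return sum(1 for x in sample if x > target_median)

def binom_tail_limits(n, p=0.5, alpha=0.01):
    """Symmetric control limits from the Binomial(n, 1/2) in-control distribution."""
    cdf, lcl = 0.0, 0
    for k in range(n + 1):
        cdf += comb(n, k) * p**k * (1 - p)**(n - k)
        if cdf >= alpha / 2:
            lcl = k
            break
    return lcl, n - lcl

random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(20)]  # in-control process, median 10
s = sign_statistic(sample, target_median=10.0)
lcl, ucl = binom_tail_limits(20)
print(s, lcl, ucl)
```

Because the limits depend only on Binomial(n, 1/2), the in-control run length distribution is the same for normal, heavy-tailed, or skewed observations.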
Study of the Reliability of a Bridge-Structure System in a Random Environment
Subhash Bagui, University of West Florida
In this article, we study the system reliability function of the bridge-structure network under a random environment. We compare this reliability function with that obtained under an ideal (laboratory) environment. Finally, we establish a monotonicity result for the system reliability with respect to the multiple correlation coefficient.
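As a concrete reference point, the reliability of the classic five-component bridge network (assumed here for illustration; the paper's exact system may differ) under an ideal environment with i.i.d. component reliability p can be computed by enumerating the structure function:

```python
from itertools import product

# Minimal path sets of the classic five-component bridge:
# edges 1,2 from the source, 4,5 into the sink, and 3 as the bridge link
# (0-indexed below).
PATHS = [{0, 3}, {1, 4}, {0, 2, 4}, {1, 2, 3}]

def works(state):
    """Structure function: system up if some minimal path has all components up."""
    up = {i for i, s in enumerate(state) if s}
    return any(path <= up for path in PATHS)

def reliability(p):
    """Exact system reliability by enumerating all 2^5 component states."""
    total = 0.0
    for state in product([0, 1], repeat=5):
        prob = 1.0
        for s in state:
            prob *= p if s else (1 - p)
        if works(state):
            total += prob
    return total

print(reliability(0.9))  # matches the closed form 2p^2 + 2p^3 - 5p^4 + 2p^5
```

The random-environment version studied in the paper replaces the fixed p with correlated, environment-driven component reliabilities, which is what makes the monotonicity result nontrivial.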
A Monte Carlo Study of the Analysis of Means Version of Levene's Test for Odd Sample Sizes
Candace V. Coulter* and Peter S. Wludyka, University of North Florida
Levene's test is a well-known robust test of the k-sample homogeneity-of-variances hypothesis for samples from populations that cannot safely be assumed to be normal. In the most robust version of Levene's test, the ANOVA F-test is applied to the absolute deviations from the median (ADM). The Analysis of Means (ANOM), which has the same assumptions as ANOVA and approximately the same power, can be used to perform Levene's test. In the ANOM version of the test, a decision chart resembling a Shewhart control chart can be used to assess both statistical and practical significance. For odd sample sizes, each sample will have one ADM of zero. In this paper, the Type I error rate and power consequences of deleting the zero ADMs will be investigated using Monte Carlo simulation. In addition, comparisons between the ANOVA F and ANOM versions of Levene's test will be made.
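A self-contained sketch of the ADM version of Levene's test described above, applied to made-up odd-sized samples (in practice, scipy.stats.levene with center='median' computes the same statistic):

```python
from statistics import median

def levene_adm_F(*groups):
    """Levene/Brown-Forsythe statistic: one-way ANOVA F computed on the
    absolute deviations from each group's median (ADM)."""
    adms = [[abs(x - median(g)) for x in g] for g in groups]
    k = len(adms)
    n = sum(len(g) for g in adms)
    means = [sum(g) / len(g) for g in adms]
    grand = sum(sum(g) for g in adms) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(adms, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(adms, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

a = [1.0, 2.0, 3.0, 4.0, 5.0]    # odd n: the median observation has ADM exactly 0
b = [0.0, 4.0, 8.0, 12.0, 16.0]  # same shape, larger spread
print(levene_adm_F(a, b))
```

With odd samples, each group contributes one ADM of exactly zero (the median observation itself); the paper's Monte Carlo study asks what deleting those zeros does to size and power.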
An Empirical Comparison of Record Linkage Procedures
Shanti Gomatam, University of South Florida
Much work on probabilistic methods of linkage can be found in the statistical literature. However, although many groups undoubtedly still use deterministic procedures, little literature is available on these strategies, and there appears to be no documentation comparing results for the two. Such a comparison is pertinent when only non-unique identifiers, such as name, sex, and race, are available as common identifiers on which the databases are to be linked. In this work we compare a stepwise deterministic linkage strategy with a probabilistic strategy, as implemented in AUTOMATCH, for such a situation. The comparison was carried out on a linkage between medical records from the Regional Perinatal Intensive Care Centers database and educational records from the Florida Department of Education. Social security numbers, available in both databases, were used to determine the true status of each record pair after matching. Match rates and error rates for the two strategies are compared, and a discussion of their similarities and differences, strengths and weaknesses is presented.
It Does Take A Rocket Scientist: Cluster Analysis Implications for a Privatized, Global Space Industry
Mark Soskin* and Warren McHone, University of Central Florida
This paper investigates locational agglomeration patterns in the U.S. space industry. Cost pressures, changing technologies favoring recoverable launch vehicles, the end of decades of NASA control of the U.S. space agenda, and global competition for the lucrative communication satellite market have unleashed aggressive competition among states to capture launch sites for the next generation of spaceships. Likely impacts on the current spatial distribution of the space industry from new launch vehicle technologies, privatized industry funding, and global competition are explored. This paper addresses the major public policy and locational issues that will determine the outcome of this struggle. Using research sponsored by a major Enterprise Florida grant, we analyze primary survey data on NASA contractors and space industry firms nationally, together with a cluster analysis of spatial patterns in pooled data from industry and publicly available sources.
Papers for Student Paper Competition
League Table: A Study of the Competition to Underwrite Floating Rate Debt
James S. Ang and Shaojun Zhang*, Florida State University
We present an empirical analysis of the competition for market share among underwriters. We develop an empirical model of the gross spread, the amount a corporate issuer pays the underwriter for underwriting services. The market price of underwriting services is rationally determined: the gross spread charged by underwriters is a function of the costs of production and distribution, expressed through the characteristics of the issue, the characteristics of the issuer, the underwriter's organizational assets, and the number of competitors. The gross spread fitted by this model for a given debt issue is interpreted as the normal gross spread the market demands for that issue. An examination of an underwriter's market share in the league table reveals that it can be affected by the underwriter's strategic pricing, that is, by charging a discount or premium relative to the normal gross spread.
Statistical Estimation of Locations of Lightning Events
Aicha Elhor, University of Central Florida
The problem of detecting the locations of lightning events on the basis of ground-based measurements has been studied extensively within the last three decades. The location of a lightning event is derived from the times of arrival of its electromagnetic radiation at several locations. The differences in arrival times are converted into differences in distances from the point of origin (x, y, z) of the radiation to (m+1) receiving sites located in the xy-plane at (a_i, b_i, 0), i = 0, …, m. The objective of the talk is to give an overview of the statistical and deterministic techniques used to solve this problem. A new treatment of the problem, based on Bayesian estimation with various noninformative priors, will also be presented.
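The time-difference-of-arrival geometry above can be sketched as follows; the receiver layout, source position, and least-squares criterion are illustrative assumptions, not the talk's specific method:

```python
import math

C = 299_792_458.0  # propagation speed of the electromagnetic radiation (m/s)

def range_differences(times, sites):
    """Convert arrival-time differences (relative to site 0) into distance
    differences; times[i] is the arrival time at sites[i] = (a_i, b_i, 0)."""
    return [C * (t - times[0]) for t in times[1:]]

def residual(source, times, sites):
    """Sum of squared misfits between modeled and observed range differences;
    minimizing this over (x, y, z) is one deterministic route to a location fix."""
    d = [math.dist(source, s) for s in sites]
    obs = range_differences(times, sites)
    return sum((d[i + 1] - d[0] - obs[i]) ** 2 for i in range(len(obs)))

# Hypothetical geometry: four receivers in the xy-plane, source at (2 km, 3 km, 5 km).
sites = [(0.0, 0.0, 0.0), (10_000.0, 0.0, 0.0),
         (0.0, 10_000.0, 0.0), (10_000.0, 10_000.0, 0.0)]
src = (2_000.0, 3_000.0, 5_000.0)
times = [math.dist(src, s) / C for s in sites]  # noise-free arrival times
print(residual(src, times, sites))  # essentially zero at the true source
```

A Bayesian treatment, as proposed in the talk, would place a (noninformative) prior on (x, y, z) and combine it with a likelihood for the noisy arrival times rather than minimizing this residual directly.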
Nonparametric Inference for the Proportionality Function in the Random Censorship Model
Glen Laird*, Myles Hollander, and Kai-Sheng Song, Florida State University
We estimate a new function, called the proportionality function, which can be used to assess whether a proportional hazards model holds. A nonparametric estimator of the proportionality function is proposed, and asymptotic properties of the estimator are ascertained. A bootstrap confidence band for the proportionality function is derived. We also develop a bootstrap test of the hypothesis that the function is constant over time.
Empirical Bayes Estimators for the Borel-Tanner Distribution
George Yanev, University of South Florida
The Borel-Tanner probability distribution was derived by Borel (1942) and Tanner (1953) to characterize the number of customers served in a queueing system with Poisson input and constant service time. It was later applied in models for random trees and branching processes. In the latter case, one of the parameters can be interpreted as the offspring mean of a Galton-Watson process with a Poisson reproduction law. We propose empirical Bayes estimators for this parameter.
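For reference, the Borel-Tanner probability mass function is P(X = n) = (r/n) e^{-λn} (λn)^{n-r} / (n-r)! for n ≥ r, with 0 < λ < 1; a quick numerical check (parameter values are illustrative):

```python
from math import exp, factorial

def borel_tanner_pmf(n, r, lam):
    """P(X = n) for the Borel-Tanner distribution: number of customers served
    in a busy period starting with r customers, traffic intensity lam in (0, 1)."""
    return (r / n) * (lam * n) ** (n - r) * exp(-lam * n) / factorial(n - r)

# The pmf sums to 1 and the mean is r / (1 - lam); verify numerically for
# moderate n (terms decay geometrically, so truncation error is negligible).
r, lam = 1, 0.3
support = range(r, 120)
probs = [borel_tanner_pmf(n, r, lam) for n in support]
print(sum(probs), sum(n * p for n, p in zip(support, probs)))
```

In the branching-process reading, λ plays the role of the Poisson offspring mean, which is the parameter the proposed empirical Bayes estimators target.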