
Seminar programme: Year 2018

 

  • Friday, September 14, 2018. Time: 11:30

KyungMann Kim, Department of Biostatistics & Medical Informatics, University of Wisconsin-Madison, USA

Complex Time to Event Data: Design and Statistical Inference for the INVESTED Trial

  • Friday, July 20, 2018. Time: 12:30

Richard Arnold, Victoria University of Wellington, Wellington, New Zealand, and Peter Jupp, University of St Andrews, Scotland

Statistics of Ambiguous Rotations

  • Wednesday, July 11, 2018. Time: 12:30

Hongzhe Li, Department of Biostatistics and Epidemiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA.

Methods for High Dimensional Compositional Data Analysis in Microbiome Studies

  • Thursday, July 5, 2018. Time: 12:00

Thomas Jaki, Medical and Pharmaceutical Statistics Research Unit, Department of Mathematics and Statistics, Lancaster University, United Kingdom

Dose-escalation trials without monotonicity assumption: A weighted differential entropy approach

  • Friday, June 8, 2018. Time: 12:30

Julie Rendlová, Institute of Molecular and Translational Medicine, Palacký University, Olomouc, Czech Republic.

Bayesian counterpart to t-tests in compositional analysis of metabolomic data

  • Friday, May 18, 2018. Time: 12:30

Jue Lin Ye, Laboratorio de Ingeniería Marítima, Universitat Politècnica de Catalunya, Barcelona, Spain.

Non-stationary multivariate characterization of wave-storms in coastal areas: copulas and teleconnection patterns

  • Friday, April 27, 2018. Time: 12:30

Irina Gribkovskaia, Molde University College, Norway

Decision support tools for supply vessel planning in offshore oil and gas logistics

  • Friday, March 9, 2018. Time: 12:30

Ivy Liu, School of Mathematics and Statistics, Victoria University of Wellington, Wellington, New Zealand.

Diagnostics for ordinal response models

 



Construct, Merge, Solve & Adapt: A general algorithm for combinatorial optimization
SPEAKER: Christian Blum
LANGUAGE: English
VENUE: Building C5, Room C506

DATE: Friday, December 14. Time: 12:00

ABSTRACT: Construct, Merge, Solve & Adapt (CMSA) is a recent hybrid algorithm for combinatorial optimization. It is based on iteratively solving reduced problem instances with an exact method such as, for example, an ILP (integer linear programming) solver. In this talk we will first present CMSA in general terms. We will then focus on two recent developments. The first concerns the use of learning to generate the reduced problem instances within CMSA. The second concerns the development of a problem-independent version of CMSA aimed at solving binary ILPs.
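To make the above concrete, here is a minimal sketch of the generic CMSA loop in Python. It is illustrative only: the function and parameter names (construct, solve_exact, age_max, and so on) are hypothetical and not taken from the speaker's implementation; the probabilistic construction procedure and the exact solver are problem-specific and are passed in as callbacks.

```python
import time
from typing import Callable, Dict, Hashable, Iterable, Set, Tuple


def cmsa(construct: Callable[[], Iterable[Hashable]],
         solve_exact: Callable[[Set[Hashable]], Tuple[Set[Hashable], float]],
         n_constructions: int = 10,
         age_max: int = 3,
         time_limit: float = 60.0) -> Tuple[Set[Hashable], float]:
    """Generic CMSA loop for a minimisation problem (illustrative sketch).

    `construct()` returns the components of one probabilistically built
    solution; `solve_exact(components)` solves the sub-instance restricted to
    those components (e.g. with an ILP solver) and returns (used_components,
    cost of the best solution found).
    """
    subinstance: Dict[Hashable, int] = {}   # solution component -> age
    best_solution: Set[Hashable] = set()
    best_cost = float("inf")
    start = time.time()
    while time.time() - start < time_limit:
        # Construct & Merge: add the components of freshly built solutions
        # to the reduced sub-instance (new components start with age 0).
        for _ in range(n_constructions):
            for comp in construct():
                subinstance.setdefault(comp, 0)
        # Solve: apply the exact method to the reduced sub-instance only.
        solution, cost = solve_exact(set(subinstance))
        if cost < best_cost:
            best_solution, best_cost = set(solution), cost
        # Adapt: reset the age of components used by the exact solver,
        # age the others, and discard components that have grown too old.
        for comp in list(subinstance):
            subinstance[comp] = 0 if comp in solution else subinstance[comp] + 1
            if subinstance[comp] > age_max:
                del subinstance[comp]
    return best_solution, best_cost
```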

ABOUT THE AUTHOR: Christian Blum has been a research scientist at the Artificial Intelligence Research Institute of the CSIC since January 2017. He was previously an Ikerbasque Research Professor and held various research and teaching positions at UPC. His research has two strands: on the one hand, he works on swarm intelligence, the branch of artificial intelligence inspired by the collective behaviour of, for example, social insects, flocks of birds and schools of fish; on the other hand, he works on hybridizing metaheuristics with more classical tools from artificial intelligence and operations research.
Regarding the applications of his research, their interdisciplinary character stands out: optimization and control problems arise in many application areas, such as telecommunications, bioinformatics, neuroscience and robotics.
More detailed information about Christian Blum's work can be found on his website.

 




Complex Time to Event Data: Design and Statistical Inference for the INVESTED Trial

SPEAKER: KyungMann Kim
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC (see map)
DATE: Friday, September 14, 2018. Time: 11:30

ABSTRACT: The INfluenza Vaccine to Effectively Stop cardioThoracic Events and Decompensated heart failure (INVESTED) trial (NCT02787044) was designed to determine which of two formulations of influenza vaccine, the standard dose or an investigational higher dose, is more effective in reducing deaths and heart- or lung-related admissions to hospital in patients with a recent history of hospitalization due to heart failure or myocardial infarction. In this presentation, I will describe challenges in the design of, and statistical inference in, randomized controlled trials with complex time-to-event data, using the INVESTED trial as an example, and opportunities for the development of novel statistical methods to address these challenges. Specifically, I will first describe how the INVESTED trial is supported, my role as principal investigator of its data coordinating center, and the original design of the trial and statistical analysis plan, and then the change in design and statistical analysis plan, which led to many challenging issues, including the revised primary endpoint of the trial and statistical analysis plans involving correlated time-to-event data and non-randomized cohorts arising from vaccinations over three influenza seasons under a randomize-once design. I will also highlight statistical analysis issues in vaccine trials and mediation analysis. Slides of this seminar are available here.


ABOUT THE AUTHOR: KyungMann Kim is Professor of Biostatistics and Medical Informatics at the University of Wisconsin-Madison. His research interests concern, among others: sequential methods of statistical analysis; clinical trials methodology; categorical data analysis; survival analysis; repeated measures analysis; applications in cancer research; applications in ophthalmology; and the interface between statistics and computer science. For more information on his research interests, see https://www.biostat.wisc.edu/content/kim-kyungmann



Statistics of Ambiguous Rotations

SPEAKER: Richard Arnold and Peter Jupp
LANGUAGE: English
VENUE: Building C5, Room D6004, Campus Nord, UPC (see map)
DATE: Friday, July 20, 2018. Time: 12:30


ABSTRACT:
Orientations of objects in $\mathbb{R}^p$ with symmetry group $K$ cannot be described unambiguously by elements of the rotation group $SO(p)$, but correspond instead to elements of the quotient space $SO(p)/K$.  Specifications of probability distributions and appropriate statistical methods for such objects have been lacking -- with the notable exceptions of axial objects in the plane and in $\mathbb{R}^3$.

We exploit suitable embeddings of $SO(p)/K$ into spaces of symmetric tensors to provide a systematic and intuitively appealing approach to the statistical analysis of the orientations of such objects.  We firstly consider the case of {\em orthogonal $r$-frames} in $\mathbb{R}^p$, corresponding to sets of $r\leq p$ mutually orthogonal axes in $p$ dimensions, which includes the Watson and Bingham distributions.  Using the same approach we then treat the three dimensional case of $SO(3)/K$, where $K$ is one of the point symmetry groups.
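As a point of reference (a standard special case, not a result from this talk): for axial data the symmetry group is $K=\{I,-I\}$, so $x$ and $-x$ describe the same axis, and the embedding
\[
  \pm x \;\longmapsto\; t(\pm x) = x x^{\top}, \qquad x \in S^{p-1},
\]
sends each ambiguous direction to a symmetric rank-one tensor, on which the Watson and Bingham models can be written; the talk generalises this construction to arbitrary quotients $SO(p)/K$.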

In both cases the resulting tools include measures of location and dispersion, tests of uniformity, one-sample tests for a preferred orientation and two-sample tests for a difference in orientation.  The methods are illustrated using data from earthquake focal mechanisms (fault plane and slip vector orientations) and crystallographic data with orientation measurements of distinct mineral phases.


ABOUT THE AUTHOR: I started out as an astronomer, studying the structure of galaxies and the orbits of stars within them. I moved into statistics in 1995 and have developed interests in a wide range of statistical applications, including some in my original subject of physics. In recent years I have had research projects in Reliability Theory, Directional Statistics, Statistics in Geophysics, and Cluster Analysis. I have a developing interest in fisheries research. I have worked in institutes in Cambridge and London in the UK, Leiden in the Netherlands and Statistics New Zealand in Wellington. I started work at Victoria in 2001. I enjoy working on applied data analysis problems, especially where a novel approach is required to analyse the data correctly.

Research interests: In my research I often take a Bayesian statistical approach. My work in reliability focusses on the description of correlated failures in multiple component systems - such as cars and computers - where the failure of one component can induce ageing or failure of other components. In geophysics my interests are in the determination of the properties of earthquakes, and then using those characteristics to infer properties of the tectonic stresses in the earth's crust. This work in geophysics has led to research into the statistics of oriented objects: earthquakes, stress axes, and crystal orientations - in particular where there are high degrees of symmetry present. In such situations specialised methods of data analysis are needed. In fisheries I am interested in models of population size, and inference from the limited data that surveys and commercial catch reports provide. For more details on Professor Arnold's research projects, visit his personal homepage at https://www.victoria.ac.nz/sms/about/staff/richard-arnold



Methods for High Dimensional Compositional Data Analysis in Microbiome Studies

SPEAKER: Hongzhe Li
LANGUAGE: English
VENUE: Building B6, Sala d'Actes Manuel Martí, Campus Nord, UPC (see map)
DATE: Wednesday, July 11, 2018. Time: 12:30


ABSTRACT:
Human microbiome studies using high-throughput DNA sequencing generate compositional data, since the absolute abundances of microbes are not recoverable from sequence data alone. In compositional data analysis, each sample consists of proportions of various organisms with a unit-sum constraint. This simple feature can lead traditional statistical methods, when naively applied, to produce erroneous results and spurious associations. In addition, microbiome sequence data sets are typically high dimensional, with the number of taxa much greater than the number of samples. These important features require further development of methods for the analysis of high dimensional compositional data. This talk presents several recent developments in this area, including methods for estimating the compositions based on sparse count data, a two-sample test for compositional vectors, and regression analysis with compositional covariates. Several microbiome studies at Penn are used to illustrate these methods, and several open questions will be discussed.
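For readers unfamiliar with the unit-sum issue, the sketch below shows one standard remedy, the centred log-ratio (CLR) transform, applied to a toy count matrix. It is generic textbook code, not the estimators presented in the talk; in particular, the pseudocount handling of zeros is a simplification of the sparse-count composition estimation discussed there.

```python
import numpy as np


def clr(counts: np.ndarray, pseudocount: float = 0.5) -> np.ndarray:
    """Centred log-ratio transform of a samples-by-taxa count matrix.

    Zeros are handled with a simple pseudocount; the talk discusses more
    principled composition estimators for sparse count data.
    """
    x = counts + pseudocount                      # avoid log(0) for unobserved taxa
    props = x / x.sum(axis=1, keepdims=True)      # unit-sum proportions per sample
    logp = np.log(props)
    # Subtract each sample's mean log-proportion, i.e. divide by its geometric mean.
    return logp - logp.mean(axis=1, keepdims=True)


# Toy example: 3 samples, 4 taxa; each CLR row sums to (numerically) zero.
counts = np.array([[10, 0, 5, 85],
                   [2, 8, 40, 50],
                   [0, 1, 9, 90]])
print(clr(counts).sum(axis=1))
```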

ABOUT THE AUTHOR: Dr. Hongzhe Li is a Professor of Biostatistics and Statistics at the Perelman School of Medicine at the University of Pennsylvania (Penn). He is Vice Chair of Integrative Research in the Department of Biostatistics, Epidemiology and Informatics, Chair of the Graduate Program in Biostatistics and Director of the Center of Statistics in Big Data at Penn. Dr. Li has been elected a Fellow of the American Statistical Association (ASA), a Fellow of the Institute of Mathematical Statistics (IMS) and a Fellow of AAAS. Dr. Li served on the Board of Scientific Counselors of the National Cancer Institute of NIH and regularly serves on various NIH study sections. He is currently an Associate Editor of Biometrics and Statistica Sinica and also co-Editor-in-Chief of Statistics in Biosciences. He served as Chair of the Section on Statistics in Genomics and Genetics of the ASA. Dr. Li’s research has focused on developing powerful statistical and computational methods for the analysis of large-scale genetic, genomics and metagenomics data and on high dimensional statistics with applications in genomics. He has published papers in Science, Nature, Nature Genetics, Science Translational Medicine, JASA, JRSS, Biometrika, etc.



Dose-escalation trials without monotonicity assumption: A weighted differential entropy approach

SPEAKER: Thomas Jaki
LANGUAGE: English
VENUE: FME, Building U, Room PC1, Campus Sud, UPC, Pau Gargallo 5, 08028 Barcelona (see map)
DATE: Thursday, July 5, 2018. Time: 12:00


ABSTRACT:
Methods for finding the highest dose that has an acceptable risk of toxicity in Phase I dose-escalation clinical trials assuming a monotonic dose-response relationship have been studied extensively in recent decades. The assumption of monotonicity is fundamental in these methods. As a result, such designs fail to identify the correct dose when the dose-response relationship is non-monotonic. Moreover, many of these approaches cannot be straightforwardly extended to dose-escalation studies of drug combinations because ordering the dose combinations is difficult. We propose a dose-escalation method that does not require monotonicity or any pre-specified relationship between dose levels. The method is motivated by the concept of weighted differential entropy. More precisely, we propose to use the difference of the weighted and standard differential entropy as a criterion for dose escalation. For a given weight function, an asymptotically unbiased and consistent estimator of the proposed dose-escalation criterion is found. For the small sample sizes typical of dose-escalation studies, simulations show that the proposed method is comparable to well-studied and widely used methods under the assumption of monotonicity and outperforms them when this assumption is violated. We subsequently extend the basic method to limit the risk of overdosing patients by imposing a time-varying safety constraint controlling the probability of overdosing. Again, simulations are used to show good performance under many different scenarios.
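For orientation, and under assumed notation (the precise criterion used in the talk may differ in detail): with $f$ a density, for instance the posterior density of the toxicity probability at a given dose, and $w$ a weight function emphasising the targeted toxicity region, the standard and weighted differential entropies are
\[
  H(f) = -\int f(x)\,\log f(x)\,\mathrm{d}x,
  \qquad
  H_{w}(f) = -\int w(x)\,f(x)\,\log f(x)\,\mathrm{d}x,
\]
and the escalation criterion described above is built from the difference $H_{w}(f) - H(f)$.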

ABOUT THE AUTHOR: Thomas Jaki is Professor of Statistics at Lancaster University and director of the Medical and Pharmaceutical Statistics Research Unit (MPS, www.mps-research.com), which has a long-standing tradition in the design and analysis of clinical trials. He is an NIHR Senior Research Fellow, Coordinator of the EU-funded IDEAS network (www.ideas-itn.eu) and a Co-Investigator of the MRC’s North-West Hub for Trials Methodology Research.

His methodological research to date has focused on adaptive designs and multiplicity, Bayesian methods and estimation with sparse data. He has worked on estimators for pharmacokinetic parameters, developed adaptive designs - in particular for multi-arm studies - and investigated Bayesian methods for dose-escalation.



Bayesian counterpart to t-tests in compositional analysis of metabolomic data

SPEAKER: Julie Rendlová
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC (see map)
DATE: Friday, June 8, 2018. Time: 12:30


ABSTRACT:
Currently, both targeted and untargeted metabolomic methods aim to find statistically significant differences in the chemical fingerprints of patients with some disease and a control group, and to identify biological markers allowing for future prediction of the disease. After the raw measurements are pre-processed and interpreted as intensities of chromatographic peaks, the differences between controls and patients are evaluated by both univariate and multivariate statistical methods. The traditional univariate approach relies on t-tests or their nonparametric version (the Wilcoxon test), and the results from multiple testing are still compared merely by p-values using a so-called volcano plot. This presentation attempts to introduce metabolomic data, pre-processing methods and their drawbacks, and to propose a Bayesian counterpart to the widespread univariate analysis, taking into account the compositional character of metabolomes. Since each metabolome is a collection of small-molecule metabolites in a biological material, the relative structure of metabolomic data is of main interest, and hence the data should be treated within the framework of the logratio methodology. Therefore, a proper choice, construction and interpretation of orthonormal coordinates will be discussed, as well as fundamental principles of compositional data analysis. The theoretical background of the contribution is illustrated with the analysis of a data set of dried blood spots of patients suffering from medium-chain acyl-CoA dehydrogenase deficiency (MCADD), and a small simulation evaluating the stability of the methods in case of loss of samples is added.
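As background on the orthonormal coordinates mentioned above (textbook logratio methodology rather than the speaker's specific construction): a $D$-part composition $x = (x_1,\dots,x_D)$ can be expressed in so-called pivot coordinates
\[
  z_i = \sqrt{\frac{D-i}{D-i+1}}\,
        \ln\frac{x_i}{\bigl(\prod_{k=i+1}^{D} x_k\bigr)^{1/(D-i)}},
  \qquad i = 1,\dots,D-1,
\]
where the first coordinate $z_1$ carries all the relative information about $x_1$ with respect to the remaining parts; univariate comparisons (t-type tests or their Bayesian counterparts) can then be carried out metabolite by metabolite on such coordinates.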

ABOUT THE AUTHOR: Already during my master's studies in applied mathematics I started to focus mainly on statistics, which later led me to continue with a Ph.D. at Palacký University, Olomouc, in the Czech Republic. My research topics are quite diverse, but in all cases I work on methods developed within the compositional data framework. Last year I joined a metabolomics group at the Institute of Molecular and Translational Medicine, where I work as a data analyst. These days, most issues and subjects from my job are nicely linked to my studies, and I can enjoy the challenges that the specifics of medical data analysis sometimes bring.



Non-stationary multivariate characterization of wave-storms in coastal areas: copulas and teleconnection patterns

SPEAKER: Jue Lin Ye
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC (see map)
DATE: Friday, May 18, 2018. Time: 12:30


ABSTRACT:
In the characterization of wave-storm events at sea, engineers tend to make use of only the significant wave-height and the peak period at the peak of the wave-storm, considering them independent of each other. This assumption introduces a series of errors that make the necessary computer simulations unreliable for operational purposes. Jue Lin has characterized a total of four wave-storm variables (significant wave-height and peak period at the peak of the storm, as well as total duration and energy of the storm) through generalized Pareto distribution functions, while their joint probability density has been characterized through a hierarchical Archimedean copula.
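For context, the two standard building blocks mentioned above are, in generic form (not the particular fit of the study): exceedances $y$ of a storm variable over its threshold are modelled by the generalized Pareto distribution
\[
  F_{\mathrm{GPD}}(y) = 1 - \Bigl(1 + \xi\,\frac{y}{\sigma}\Bigr)^{-1/\xi}, \qquad y > 0,
\]
with shape $\xi$ and scale $\sigma$, while the dependence between pairs of variables is captured by an Archimedean copula such as the Gumbel copula
\[
  C_{\theta}(u, v) = \exp\!\Bigl\{-\bigl[(-\ln u)^{\theta} + (-\ln v)^{\theta}\bigr]^{1/\theta}\Bigr\},
  \qquad \theta \ge 1,
\]
nested hierarchically to join all four storm variables.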

The next step has been to take the characterization, which assumes stationarity in the period 1994-2014, to the level of non-stationarity. Wave projections for the period 1950-2100 have been used, and this non-stationarity, achieved through vector generalized additive models, can provide information related to climate change. What is more, climate teleconnection patterns such as the North Atlantic Oscillation, or the less-known East Atlantic pattern and Scandinavian pattern, are used as covariates in the vector generalized additive models to establish their relationship to the threshold of the wave-storms and the yearly occurrence, as well as to the parameters of the generalized Pareto distributions. The latter can, in turn, inform about the moving mean and variance of each variable.
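One illustrative reading of this non-stationarity (an assumed form, not necessarily the exact covariate structure of the study) is that the generalized Pareto parameters, the threshold and the yearly occurrence rate are expressed as smooth functions of time and of the teleconnection indices, e.g.
\[
  \log \sigma_t = f_0(t) + f_1(\mathrm{NAO}_t) + f_2(\mathrm{EA}_t) + f_3(\mathrm{SCAND}_t),
\]
with the smooth terms $f_k$ estimated by the vector generalized additive models.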

Two study areas have been considered: the Catalan Coast and the northwestern Black Sea. Especially in the Black Sea, where measurements and projections are scarce, the characterization of a set of wave projections for the same period of 1950-2100, under two climate change scenarios, has provided new insight into the evolution of wave-storm events in the area.

ABOUT THE AUTHOR:



Decision support tools for supply vessel planning in offshore oil and gas logistics

SPEAKER: Irina Gribkovskaia
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC (see map)
DATE: Friday, April 27, 2018. Time: 12:30


ABSTRACT:
The supply vessel planning problem arises in upstream offshore oil and gas logistics, where supply vessels deliver necessary materials and equipment to a set of offshore installations from an onshore supply base. Installations require periodic service due to their limited deck capacity and the need for continuous supply of various types of cargo. Vessel schedules should be reliable, since the downtime of an installation is very costly, and at the same time cost-efficient, as supply vessels are the most expensive logistics resource. The objective is to develop a least-cost weekly sailing plan used repetitively over a period of several weeks or months. We introduce several models and solution algorithms developed to handle deterministic variants of the supply vessel planning problem, including emissions reduction through speed optimization. Weather conditions are a major uncertainty factor in supply vessel planning, since sailing times and service durations at offshore installations are weather dependent. Bad weather quite often makes it impossible to deliver all the planned cargo according to the planned schedule, resulting in reduced service levels and the added cost of using extra vessels. Therefore, planners aim to create vessel schedules with sufficient robustness against weather uncertainty to avoid frequent schedule disruptions. We present several optimization-simulation tools for the construction of robust, green and cost-efficient supply vessel schedules. We also present the results of several simulation studies on strategic fleet sizing under weather uncertainty and on the evaluation of different operational recourse strategies in response to weather.
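To fix ideas, a common voyage-based formulation of the deterministic weekly planning problem reads as follows; this is a generic sketch under assumed notation, not necessarily one of the models presented in the talk. With $V$ a set of candidate voyages, $c_v$ the cost of voyage $v$, $a_{iv} = 1$ if voyage $v$ visits installation $i$ (and 0 otherwise), and $s_i$ the number of weekly services installation $i$ requires, a weekly sailing plan is chosen by
\[
  \min \sum_{v \in V} c_v x_v
  \quad \text{subject to} \quad
  \sum_{v \in V} a_{iv}\, x_v \ge s_i \;\;\forall i,
  \qquad x_v \in \{0,1\},
\]
and its robustness to weather is then assessed by simulating sailing and service times for the selected voyages.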

ABOUT THE AUTHOR:

Irina Gribkovskaia has been a professor in quantitative logistics at Molde University College-Specialized University in Logistics (MUC), Norway, since 1999; for the previous 20 years she worked at the Department of Optimization Methods and Optimal Control of Belarusian State University. Her current research interests include the development of optimization and simulation models and algorithms for the solution of vehicle routing problems with applications in offshore oil and gas logistics. Up to now she has supervised 7 PhD theses, and she is the author of over 30 articles and book chapters on vehicle routing published in EJOR, JORS, Networks, C&OR, ORL, TRD, TRC, Omega, INFOR, IJPDLM, etc. She usually teaches courses on Mathematical Modelling, Vehicle Routing and Network Logistics for the Master programs in Logistics Analytics and Petroleum Logistics at MUC, and gives courses on Offshore Petroleum Logistics at MUC and at many other universities and oil and gas companies in Norway and abroad. She coordinates a joint double-degree Master in Petroleum Logistics with the Russian University of Oil and Gas, and coordinates research-based academic collaboration projects in Logistics Analytics, Arctic Logistics, and Oil, Gas and Renewable Energy Logistics with the largest universities in Russia, Belarus and Brazil. She has established close contacts within the oil and gas industry in Norway, Brazil and Russia, also giving her MSc and PhD students the opportunity to benefit from this network. Students perform their research based on up-to-date problems within the industry and in close cooperation with professionals from oil and gas companies.

More information about Irina Gribkovskaia can be found on her website.



Diagnostics for ordinal response models

SPEAKER: Ivy Liu
LANGUAGE: English
VENUE: Building C5, Room C5016, Campus Nord, UPC (see map)
DATE: Friday, March 9, 2018. Time: 12:30


ABSTRACT:
This talk has two parts. The first part provides some graphical and numerical methods for checking the adequacy of the proportional odds regression model using cumulative residuals. The methods focus on evaluating functional misspecification of specific covariate effects. An example is given using a data set from the Normative Aging Study, which examines the effect of two markers of oxidative stress in men aged 48-94 years, white blood cell count and C-reactive protein, on FBG measurement (recorded in three clinically defined ordinal categories).
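For reference, the model whose adequacy is being checked is the standard proportional odds (cumulative logit) model: for an ordinal response $Y$ with categories $1,\dots,J$ and covariates $x$,
\[
  \log\frac{P(Y \le j \mid x)}{P(Y > j \mid x)} = \alpha_j - \beta^{\top}x,
  \qquad j = 1,\dots,J-1,
\]
with a single slope vector $\beta$ shared across the $J-1$ cumulative logits; the cumulative residuals probe misspecification of the covariate effects entering this linear predictor.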

The second part presents a new goodness-of-fit test for the ordered stereotype model used for an ordinal response variable. The proposed test is based on the well-known Hosmer-Lemeshow test and its version for the proportional odds regression model. One of the main advantages of the ordered stereotype model is that it allows us to determine a new, uneven spacing of the ordinal response categories, dictated by the data. The proposed test makes use of this new adjusted spacing to partition the data. A simulation study shows good performance of the proposed test under a variety of scenarios.
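Likewise for reference, the ordered stereotype model replaces the common slope by monotone category scores, which is where the data-driven spacing comes from:
\[
  \log\frac{P(Y = j \mid x)}{P(Y = 1 \mid x)} = \alpha_j + \phi_j\,\beta^{\top}x,
  \qquad 0 = \phi_1 \le \phi_2 \le \dots \le \phi_J = 1,
\]
and the fitted scores $\phi_j$ provide the uneven spacing of the response categories that the proposed Hosmer-Lemeshow-type test uses to partition the data.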

ABOUT THE AUTHOR: Ivy Liu received a PhD in Statistics from the University of Florida, Gainesville, Florida, USA. In 2000, she joined the Statistics and Operations Research group as a lecturer. She is now an associate professor in the School of Mathematics and Statistics at Victoria University of Wellington, New Zealand. Her main research area is Categorical Data Analysis, including ordinal response data analysis, longitudinal data analysis, repeated measurements, and their applications in medical science. For more details on her research projects, visit her personal homepage: http://homepages.ecs.vuw.ac.nz/~iliu/