Nihat Altiparmak received his B.S. degree in Computer Engineering from Bilkent University in May 2007 and his Ph.D. degree in Computer Science from the University of Texas at San Antonio in May 2013. He joined the Department of Computer Science and Engineering at the University of Louisville as a tenure-track Assistant Professor in August 2013, where he has been a tenured Associate Professor since July 2019. His research interests include data storage systems, parallel and distributed systems, high performance computing, cloud computing, and computer networks. His research findings have appeared in top-tier international journals including IEEE Transactions on Computers, IEEE Transactions on Parallel and Distributed Systems, ACM Transactions on Storage, and ACM Transactions on Sensor Networks. He received the National Science Foundation's (NSF) CISE Research Initiation Initiative (CRII) award in 2017 for his research on automatic storage system optimizations, the NSF Major Research Instrumentation (MRI) award in 2018 to build a high-performance big data analysis platform at the University of Louisville, and a Best Research Paper Runner-up award at the CloudCom 2019 conference. He is a Senior Member of the IEEE and the founding director of the Computer Systems Laboratory at the University of Louisville.
- Ph.D. in Computer Science, University of Texas at San Antonio, 2013
- B.S. in Computer Engineering, Bilkent University, 2007
Disk I/O is a major bottleneck limiting the performance and scalability of data-intensive applications. A common way to address disk I/O bottlenecks is to use parallel storage systems and exploit the concurrent operation of independent storage components; however, achieving consistently high parallel I/O performance is challenging under static configurations. Modern parallel storage systems, especially in the cloud, enterprise data centers, and scientific clusters, are commonly shared by various applications generating dynamic and coexisting data access patterns. Nonetheless, these systems generally employ a one-layout-fits-all data placement strategy, frequently resulting in suboptimal I/O parallelism. Guided by association rule mining, graph coloring, bin packing, and network flow techniques, this paper proposes a general framework for adaptive parallel storage systems, with the goal of continuously providing a high degree of I/O parallelism. Evaluation results indicate that the proposed framework is highly successful in adapting to skewed parallel access patterns for both hard disk drive (HDD) based traditional storage arrays and solid-state drive (SSD) based all-flash arrays. Beyond storage arrays, the proposed framework is sufficiently generic to be tailored to various other parallel storage scenarios, including but not limited to key-value stores, parallel/distributed file systems, and the internal parallelism of SSDs.
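To illustrate one of the techniques the abstract mentions, the sketch below shows how graph coloring can guide adaptive data placement: blocks that are frequently accessed together (e.g., co-access pairs mined in an association-rule style) are connected in a conflict graph, and a coloring assigns conflicting blocks to different devices so they can be read in parallel. This is a minimal illustrative sketch, not the paper's actual algorithm; the block names, co-access pairs, greedy heuristic, and disk count are all hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical co-access pairs, e.g., mined from an I/O trace:
# blocks in a pair are often requested together, so placing them
# on the same device would serialize their retrieval.
co_access_pairs = [("b0", "b1"), ("b0", "b2"), ("b1", "b2"), ("b3", "b4")]

# Build the conflict graph (undirected adjacency sets).
graph = defaultdict(set)
for a, b in co_access_pairs:
    graph[a].add(b)
    graph[b].add(a)

def greedy_color(graph):
    """Assign each block the smallest color unused by its neighbors."""
    color = {}
    for node in sorted(graph):  # deterministic visiting order
        used = {color[n] for n in graph[node] if n in color}
        c = 0
        while c in used:
            c += 1
        color[node] = c
    return color

# Map each color class to a device; co-accessed blocks end up on
# distinct disks whenever the color count fits the disk count.
placement = greedy_color(graph)
num_disks = 4  # assumed array width
disk_of = {blk: c % num_disks for blk, c in placement.items()}
print(disk_of)
```

In practice an adaptive framework would also have to bound migration cost and respect per-device capacity, which is where the bin packing and network flow formulations mentioned above come into play.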