Illustrating Microservices in Action: A Closer Look at the Task Manager Application

In today’s rapidly evolving technological landscape, software development approaches have undergone a paradigm shift. One of the most influential trends to emerge is microservices architecture: a modern design pattern that builds large, complex applications as a collection of small, independent, and loosely coupled services. Within this project, we employed this pattern to avoid race conditions in the server-side service as it scales.

The Task Manager is a component within the GenDiseak modules, responsible for delegating storage and analysis workflows (Figure 1). As the user base expands, the GenDiseak service will need to scale out by deploying multiple replicas within the Kubernetes cluster to accommodate the increasing number of requests. However, a challenge arises when these GenDiseak replicas update the database simultaneously, which can lead to race conditions (Figure 2).

Figure 1. Task Manager module in GenDiseak

Figure 2. Race conditions in replicas

To address the problem and prevent redundant task execution, our approach centralizes the Task Manager (Figure 3). You can picture the Task Manager as a project manager and the GenDiseak replicas as its workforce. Upon receiving a task from a user or another process, GenDiseak creates a task entry in the database by interacting with the Task Manager. The Task Manager then continuously scans for pending tasks and delegates them to the GenDiseak replicas (Figure 4).
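To make the claim step concrete, here is a minimal sketch of the claim-and-dispatch idea; the SQL `tasks` table with `id`, `status`, and `payload` columns is a hypothetical stand-in, not the actual Task Manager schema:

```python
# Minimal sketch of centralized task claiming; schema and names are
# hypothetical, not the actual Task Manager implementation.
import sqlite3

def claim_next_task(conn: sqlite3.Connection):
    """Atomically claim one pending task so no two replicas execute it."""
    cur = conn.execute(
        # A single UPDATE claims the row atomically, which is what prevents
        # the race condition between concurrent replicas shown in Figure 2.
        # (RETURNING requires SQLite >= 3.35.)
        "UPDATE tasks SET status = 'running' "
        "WHERE id = (SELECT id FROM tasks WHERE status = 'pending' LIMIT 1) "
        "RETURNING id, payload"
    )
    row = cur.fetchone()  # None when there is no pending work to delegate
    conn.commit()
    return row
```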

Figure 3. Centralized Task Manager

Figure 4. Interaction between GenDiseak and Task Manager

Cron tasks form a specific task category. In GenDiseak’s case, a cron task periodically monitors Illumina BaseSpace for uploaded files and initiates the analysis pipeline whenever new sequencing files are detected. Let’s illustrate how cron tasks collaborate with the Task Manager through an example. Initially, the Task Manager schedules the very first cron task. Once the scheduled time arrives, the task is assigned to GenDiseak for execution. When the task completes, the Task Manager’s API is invoked to update the task status, and the cycle continues for subsequent tasks in the same manner (Figure 5).

Figure 5. Cron tasks example
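Concretely, the cycle above boils down to a loop like the following sketch, where the helpers `schedule_task`, `run_on_replica`, and `mark_done` are hypothetical stand-ins for the Task Manager’s actual API:

```python
# Illustrative sketch of the cron cycle in Figure 5; the helper functions
# are hypothetical placeholders for the Task Manager's API.
import time

CRON_INTERVAL = 15 * 60  # e.g., check Illumina BaseSpace every 15 minutes

def cron_loop(schedule_task, run_on_replica, mark_done):
    next_run = time.time()
    while True:
        task_id = schedule_task(run_at=next_run)    # Task Manager creates the entry
        time.sleep(max(0, next_run - time.time()))  # wait for the scheduled time
        run_on_replica(task_id)                     # delegated to a GenDiseak replica
        mark_done(task_id)                          # status updated via the API
        next_run += CRON_INTERVAL                   # schedule the next occurrence
```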

Within the project scope, the Sidecar pattern was employed for the Task Manager and its corresponding database, and the Strangler Fig approach was used to gradually replace the original Task Manager within the GenDiseak system. To sum up, the microservices design pattern gives our platform services a powerful way to build scalable applications by decomposing monolithic structures into discrete, self-contained components.

From Novice to Dual Roles in Just One Month: Immersive Internship Experience in Both Product and Engineering Positions

Infodemic PM:

I began my internship journey at AI Labs within the Infodemic department as a Product Manager intern. I spent the initial days understanding our target clients and the specific services AILabs and the Infodemic team wanted to deliver with this product. After gaining a baseline understanding of our product, I conducted a comprehensive competitive analysis to identify industry gaps and pinpoint opportunities, helping us carve out a unique value proposition.

During the next part of my internship, I attended business meetings with prospective clients from foreign countries. Based on insights from various stakeholders, potential partners, and my understanding of market needs, I crafted a product requirements document that delineated a clear user flow, ensuring our product would align with potential partners and their specific demands.

As my tenure with the Infodemic team came to a close and I transitioned into a different project, I compiled a glossary for the Infodemic.cc project. This document was intended to ensure continuity and shared understanding among team members working towards the product’s general release.

I deeply value the insights and experience I gathered while managing different facets of the software release cycle for an inventive new product during my time as a product manager on the Infodemic team. I would like to give special thanks to Chloe for being the best mentor I could have asked for. Her mentorship was pivotal when I first joined AI Labs; she taught me so much and showed me what it means to be a proficient product manager.

Yating Music Studio SWE:

Upon joining the Yating Music Studio project, my first priority was to understand our product’s market positioning. This involved learning not only our own product’s offerings but also those of competing products with similar services. For new entrants to this industry, distinctly differentiating their products seemed pivotal to success; even then, products unable to carve out a unique niche appeared to struggle considerably.

With this new understanding, I spent some time testing the Music Studio application and subsequently put together a prioritized list of items that needed to be repaired or enhanced to realize our goals. I spent time tackling these priority items, enriching the user experience and modifying the application. I also set up test suites for the application using Google Test and Google Mock and repaired faulty source code, pushing the product closer to its general release.

I would like to extend a heartfelt thanks to Benson. He graciously welcomed me into his team, providing me with the opportunity to contribute and develop software for his project. His mentorship and trust in my ability are what allowed me to develop and grow immensely in my role.

Final Thoughts:

My internship at Taiwan AI Labs has been an incredibly enriching experience. I have had the privilege of being part of two distinct software development teams, serving in two contrasting roles: product manager and software engineer. Contributing to a rapidly moving product from these two vantage points has given me invaluable insight into the challenges of each role, and this dual perspective is something I will carry forward into all future projects.

Exploring Medical Frontiers: Deep Learning for ICU Tabular Data and Image Registration

Tabular data learning
Introduction

Artificial intelligence (AI) has significantly transformed the landscape of medicine. AI has proven effective in aiding clinicians by facilitating diagnoses, discovering new treatment regimens, and even predicting disease prognosis. However, while AI models have been successful at interpreting medical images, the realm of medical tabular data, which is routinely collected in daily practice, remains largely unexplored and challenging for model training. Several key factors contribute to the difference between image and tabular data:


Contextual Information: Medical image data encapsulates a wealth of contextual information within its visual representations, whereas tabular data presents a sparser landscape. Pixels within an image are correlated with their neighbors, so image models can, in principle, learn useful patterns directly from raw pixels. In contrast, the values in one column of a table bear no guaranteed relation to adjacent columns. This distinction necessitates extensive domain expertise for meaningful feature selection and engineering before effective model training.


Inherent Data Noisiness: While noise within medical images can be readily identified, localized, and rectified by individuals without advanced medical knowledge, tabular data is harder. For instance, a layman might be able to spot blood-pressure values outside the normal range, yet may not realize that a record where the diastolic pressure exceeds the systolic pressure also needs fixing. Such anomalies likely result from transcription errors during documentation and are intricate to correct if one does not even know to look for them.
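A concrete sketch of this kind of check, with hypothetical column names for systolic (`SBP`) and diastolic (`DBP`) pressure:

```python
# Hypothetical blood-pressure sanity check: a plain range check catches the
# layman-visible outliers, but the swapped-value case needs domain knowledge.
import pandas as pd

df = pd.DataFrame({"SBP": [120, 80, 135], "DBP": [80, 125, 70]})

out_of_range = (df["SBP"] < 50) | (df["SBP"] > 250)  # obvious outliers
swapped = df["DBP"] > df["SBP"]                      # likely transcription errors
print(df[swapped])  # flag for clinical review rather than silent deletion
```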


Pervasive Data Sparsity: Medical data frequently exhibits high levels of sparsity. Tabular medical records are recorded primarily for medical purposes, and patients have distinct combinations of data because of their very different medical statuses. Consequently, features A, B, and C documented in patient X’s record often cannot be found in patient Y’s. In other words, a large share of missing values (roughly 75–90% of the cells) is to be expected in the table.


In this article, we aim to address these three challenges using intubation and sepsis prediction as illustrative examples. The intubation dataset was fetched from the Taipei Medical University database, while the sepsis data was downloaded from the 2019 sepsis early prediction open dataset on Kaggle.

Method

Data collection:

Intubation: The intubation data was fetched and curated from the Taipei Medical University database, which resides on NAS. This dataset comprises 44 laboratory features (blood gas analysis, regular blood tests, renal profiles, metabolic exams, and liver functions) and 7 dynamic features (diastolic/systolic/mean arterial pressure, breath rate, heart rate, O2 saturation, and temperature). The curation of this data was based on medical knowledge. Extreme and unreasonable values (e.g., temperature = 370, which should be 37.0) were removed. The labels and timestamps indicating whether and when a patient was intubated were derived from the database’s billing time log. Following the patient selection pipeline, approximately 7,700 patients with qualified data were included, of which 3.9% were intubated (designated as the positive tag).

Sepsis: The sepsis dataset was downloaded directly from Kaggle. This dataset comprises 27 lab features, 7 dynamic features, and 5 static features (such as Age, Sex, ICU unit, etc.). The data was curated using medical knowledge, and extreme and unreasonable values were eliminated. Following the patient selection pipeline, approximately 40,000 patients with qualified data were included. Among them, 1.1% of the patients were labeled as positive (sepsis occurred within the stay in the ICU).

Data curation:

Data curation is one of the most challenging and pivotal tasks in tabular data learning, particularly within the ETL (Extract, Transform, Load) pipeline. An engineer responsible for data retrieval may not be familiar with the distinctions between white blood cell (WBC) counts in urine, blood, or cerebrospinal fluid (CSF). Consequently, values measured for different medical purposes, with different reference ranges, may all end up in a single column.


A straightforward way to identify this issue is to plot the data distribution before any preprocessing or post-processing steps, giving an initial impression of the data. Generally, most medical data should exhibit a unimodal distribution, closely resembling a normal distribution or displaying left or right skewness. For instance, consider the distribution of recorded heart rates (HR):

Although some extreme values might catch one’s attention, a value of 280 could result from a physically possible tachycardia event, and the minimum value of 28 could come from a dying patient. Do not eliminate data before a second inspection. Let’s dig into the patients behind these extreme values.

The data in the right plot is more comprehensible: this patient exhibits occasional tachycardia, likely due to medication. In the left plot, the patient’s data is considerably more intricate, displaying both extremely high and extremely low HR values. Both scenarios are observed in clinical practice, so as engineers we should not cap or discard individual data points solely based on value thresholds. Instead, if we can afford to lose a few patients from the data, I would drop the left-hand patient from my training set.
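A sketch of this two-step inspection, assuming a long-format table with hypothetical `patient_id` and `HR` columns:

```python
# First look at the global distribution, then trace the individual patients
# behind the extreme values before deciding what to drop. Column names are
# hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

def inspect_feature(df: pd.DataFrame, col: str = "HR", lo: float = 28, hi: float = 280):
    df[col].hist(bins=100)                   # global distribution first
    plt.xlabel(col)
    plt.show()
    extreme_ids = df.loc[(df[col] <= lo) | (df[col] >= hi), "patient_id"].unique()
    for pid in extreme_ids:                  # then each patient's own series
        df[df["patient_id"] == pid].plot(y=col, title=f"patient {pid}")
        plt.show()
```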


Unfortunately, numerous potential scenarios demand thorough examination. For instance, the distribution of PTT values exhibits unusual spikes at 150 and 250 units, which correspond to the maximum time intervals hospitals set for blood coagulation measurement. By contrast, the abnormal distribution of calcium levels results from erroneously recording calcium values from varied measurements and units, a significant oversight. Data cleaning proves intricate and laborious; I believe the optimal strategy for navigating this challenging endeavor is close collaboration with clinical professionals.

Modeling:

There are at least two ways to model the prediction problem:

  1. Static prediction (or left alignment): based on the first 24 hours of data after ICU admission, what is the probability that a patient develops sepsis during the ICU stay?
  2. Real-time prediction (or sliding window): based on the data in the previous X-hour window, what is the probability that a patient develops sepsis in the next 24 hours?

XGBoost was selected for the left-alignment problem due to its excellent native handling of missing values. For the sliding-window problem, we tested the commonly used LSTM model for multivariate time-series prediction.
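Here is a minimal, self-contained sketch of the left-alignment setup, with synthetic data standing in for the curated feature matrix; the hyperparameters are illustrative, not our exact configuration:

```python
# Synthetic stand-in for the summarized per-patient feature matrix; XGBoost
# handles the NaN cells natively, so no imputation is required.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
X[rng.random(X.shape) < 0.8] = np.nan            # mimic ~80% sparsity
y = (rng.random(1000) < 0.04).astype(int)        # ~4% positives, as in intubation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier(
    scale_pos_weight=(y_tr == 0).sum() / (y_tr == 1).sum(),  # counter imbalance
    eval_metric="aucpr",
)
model.fit(X_tr, y_tr)  # NaNs are routed along learned default branches
print(model.predict_proba(X_te)[:, 1][:5])
```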

(a) Observation window: the data used for building models. Prediction window: if an event occurs in this window, the instance (patient) is considered positive. (b) Sliding window: the observation and prediction windows shift as time evolves. (c) Left alignment: all instances share the same starting point and observation window size.

Image courtesy: https://www.nature.com/articles/s41746-021-00529-x

Feature augmentation and time compression:

For the left-alignment task, a single instance can consist of several rows of data. One way to summarize multiple data points within the observation window is statistical summarization: we chose the minimum, average, and maximum values of each feature of an instance within the window to represent its behavior over this time span. This approach also offers the advantage that if a feature has at least one data point within the observation window, it will not produce troublesome missing values for the models later on.
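A sketch of this summarization with pandas, assuming a hypothetical long-format table keyed by `patient_id`:

```python
# Collapse each patient's rows inside the observation window into one row of
# min/mean/max features; NaNs are skipped, so a single observed value is
# enough to avoid a missing summary cell. Column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "HR": [80, 120, np.nan, 65],
    "SBP": [110, np.nan, 130, 125],
})

summary = df.groupby("patient_id").agg(["min", "mean", "max"])
summary.columns = ["_".join(c) for c in summary.columns]  # e.g., HR_min, HR_max
print(summary)
```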

Baseline:

The current consensus for diagnosing sepsis is the Sepsis-3 definition, published in 2016. Under this definition, sepsis is diagnosed when a person exhibits clear manifestations of infection together with an increase in the SOFA score of 2 or more. It is worth noting that the Sepsis-3 consensus is meant only for making current diagnoses, not for forecasting future occurrences. In our approach, we use a WBC level over 10,000/μL as a proxy for infection and estimate the SOFA score accordingly, applying the Sepsis-3 criteria in an off-label manner to predict the future occurrence of sepsis. We then use the results as a baseline for comparison with our models.
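A sketch of this off-label baseline; the inputs are hypothetical, and estimating the SOFA score itself is outside the scope of the snippet:

```python
# Off-label Sepsis-3 proxy used as the baseline: infection is proxied by a
# WBC count above 10,000/uL, organ dysfunction by a SOFA increase of >= 2.
def sepsis3_proxy(wbc_per_ul: float, sofa_now: int, sofa_baseline: int) -> bool:
    infection_suspected = wbc_per_ul > 10_000
    organ_dysfunction = (sofa_now - sofa_baseline) >= 2
    return infection_suspected and organ_dysfunction

print(sepsis3_proxy(wbc_per_ul=14_500, sofa_now=5, sofa_baseline=2))  # True
```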

Result

Intubation prediction, left alignment

From the table, we can see that the XGBoost model significantly outperforms random guessing (with a prevalence rate of 0.04). With this model, a doctor is capable of predicting whether a patient is highly likely to require intubation and ventilation during their ICU stay. The model performs even better when it can access the first 48 hours of data instead of only the first 24 hours. However, this improvement was not observed when we further extended the observation window from 48 to 72 hours.


Another important property of this model is its explainability. Let’s examine the feature importance:

The most influential factor (is_op) in forecasting intubation is whether the patient was admitted to the ICU after major surgery. The second (RBC_max) is the maximum red blood cell count in the data window, which relates to the patient’s oxygen-carrying capacity. The third (BUN_min) is an indicator of renal function, closely tied to the assessment of osmolality. The fourth (bf_min) is the slowest breathing rate recorded in the data window, a direct measure of lung function. All of these factors are medically connected to determining a patient’s need for intubation.

Sepsis prediction, left alignment

The second task was to build a model on a more homogeneous dataset and try to repeat the success of the intubation task.

A similar conclusion! The XGBoost model is again capable of forecasting sepsis occurrences, though the improvement from 48 to 72 hours is not prominent. Compared with the Sepsis-3 criteria, our XGBoost model clearly outperforms on all measured metrics.

The feature importance highlights systolic blood pressure (SBP), leading us to speculate that the model aims to predict organ failure and the consequent hypotension (low systolic blood pressure). Lung function is additionally assessed via oxygen saturation (O2Sat_min) and respiratory rate (Resp_min).

Sepsis prediction, sliding window

Finally, we challenged ourselves by transitioning to sliding window prediction. Sliding window, also known as real-time prediction, is of paramount importance in the clinical realm, as it offers instantaneous monitoring and awareness of sepsis occurrences.


Unlike the left-alignment approach, sliding-window prediction is exceedingly challenging due to highly imbalanced class labels. Consider a patient diagnosed with sepsis at the 72nd hour: when we break this patient’s data down to hourly resolution, there are 71 negative data points and only 1 positive (data points after the occurrence of sepsis are not meaningful and were not recorded in the dataset). Consequently, the sliding-window model must detect subtle time-variant signals in an incredibly imbalanced dataset. In the end, merely 0.18% of the prediction cases were labeled positive, which establishes our random-guessing baseline.
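One common way to keep such a model from collapsing to “always negative” is to reweight the rare class. Here is a minimal sketch; the architecture, shapes, and hyperparameters are illustrative, not our exact model:

```python
# Minimal sliding-window LSTM sketch with class weights countering the
# ~0.18% positive rate; sizes are illustrative placeholders.
from tensorflow import keras

window, n_features = 24, 34
model = keras.Sequential([
    keras.Input(shape=(window, n_features)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(curve="PR", name="auprc")])

pos_rate = 0.0018
class_weight = {0: 1.0, 1: (1 - pos_rate) / pos_rate}  # upweight rare positives
# model.fit(X_windows, y_labels, class_weight=class_weight)
```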


Our current model demonstrates a time-independent AUC of 0.72, albeit with a less favorable f1-score of 0.008 and an AUPRC of 0.008. At first glance, this might not appear overly promising. However, it represents a significant improvement over using the Sepsis-3 criteria for predictions, where the f1-score is 0.00, AUC is 0.50, and the AUPRC is 0.002 for the same task. When interpreting these results, it’s important to keep in mind that this model is aiming to detect the proverbial “needle in a haystack”—a target occurrence of one in a thousand. From this perspective, our model has performed fairly well. Nevertheless, there remains substantial room for improvement, and I’ll leave this final step for you to explore.


Image registration

Introduction

It is common for patients to have more than one image, whether from the same or different modalities. Clinicians benefit from images acquired across multiple modalities or time points, gaining a holistic understanding of the patient’s condition; for example, a doctor may require daily X-rays to assess a patient’s pneumonia status. One pivotal aspect of this integration is image registration: a process that aligns multiple images of the same or different modalities into a unified coordinate space.


However, the integration of imaging data poses challenges: the images are often acquired in different positions, orientations, and scales, making direct comparisons or joint analyses cumbersome. This is where image registration emerges as a crucial solution. By computationally and automatically aligning images from different modalities or multiple time points, medical professionals can create a unified visualization that enables them to integrate information from various sources. Moreover, a deep learning model trained on these sets of images is likely to achieve higher performance and robustness.


Medical image registration is not a trivial process. Due to the intricacies of the human body, a simple affine transformation (translation plus a linear transform) cannot deliver satisfactory results. Moreover, ground-truth displacement fields, the fields (or matrices) that dictate how to shift pixels from the source image to the target image, are rarely available as learning targets, rendering supervised learning impractical. Most significantly, traditional registration methods typically take over 7 minutes to process a single image, making these conventional algorithms impractical to deploy.


VoxelMorph is a deep-learning-based registration model that directly learns the displacement field from the source and reference images. Unlike traditional methods, VoxelMorph can generate a registration displacement field for a 3D image within 0.1 seconds. At its core, VoxelMorph takes two images (source and reference) as inputs to its U-Net-like architecture and learns the relationship between their pixels. The output field can be applied directly to the source image as well as to accompanying image features (such as segmentation masks or key landmarks). Notably, VoxelMorph is an unsupervised model, making it ideal for handling diverse clinical scenarios.

This article tests VoxelMorph’s ability to register MR images and paves the way for our future use.


Method

Objective function

Image courtesy: the VoxelMorph paper.


The objective function for training a VoxelMorph model is simple. We attempt to find a deformation Φ that, when applied to the source image (denoted “m,” for the “moving” image), produces a warped image m∘Φ that closely resembles the reference image, denoted “f” (for the “fixed” image). Additionally, a regularization term encourages the displacement field to stay smooth, preventing unrealistic deformations.
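Written out, following the notation of the VoxelMorph paper, with λ weighting the smoothness term:

```latex
\hat{\phi} \;=\; \arg\min_{\phi}\; \mathcal{L}_{\mathrm{sim}}\!\left(f,\, m \circ \phi\right) \;+\; \lambda\, \mathcal{L}_{\mathrm{smooth}}(\phi)
```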

Data collection

We downloaded the data directly from the Learn2Reg Challenge and selected the OASIS dataset for conducting the registration test. This dataset comprises 454 T1-weighted MRI images, and our objective is to register any two randomly chosen images and achieve anatomical alignment between them.
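As a rough sketch of how such a model can be assembled with the open-source voxelmorph package (TensorFlow backend); the volume shape and loss weights below are illustrative assumptions, not necessarily the configuration used here:

```python
# Illustrative VoxelMorph setup with the open-source `voxelmorph` package;
# shape and loss weights are placeholders, not our exact settings.
import voxelmorph as vxm

inshape = (160, 192, 224)               # a typical 3D T1 MRI volume shape
model = vxm.networks.VxmDense(inshape)  # U-Net-like network predicting the field

# Two outputs: the moved image (similarity loss) and the displacement field
# (smoothness regularizer), mirroring the objective above.
losses = [vxm.losses.MSE().loss, vxm.losses.Grad("l2").loss]
model.compile(optimizer="adam", loss=losses, loss_weights=[1.0, 0.01])
```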


Result

The visualized result is presented in the figure below. As is evident, one of the major distinctions between the source images (moving images) and the reference images (fixed images) lies in the size of the ventricles, the black chambers within the brain. After applying the model, the registered images (moved images) present ventricles of similar size to those in the reference images. Consider a scenario where ventricle volume estimation is crucial for diagnosis (e.g., enlarged ventricles possibly indicating CSF reabsorption issues or blockage); this registration step could significantly improve a downstream model’s prediction accuracy and overall performance.

Beyond the visual results, we also calculated the Dice scores between the segmentation masks (important regions of the brain) of the registered images and the reference images. The average Dice score across the 49 patients in the test set is 0.976, verifying the success of our trained model.
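For reference, the Dice score between two binary masks can be computed as follows; this is a generic sketch, not our exact evaluation script:

```python
# Generic Dice score between two binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

print(dice(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])))  # ~0.667
```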


Ensure High Availability for Services: Endpoint Monitoring with AWS

HA and Monitoring

Availability is a quality attribute that, in general, refers to the degree to which a system is in an operable state. High availability then describes infrastructure set up to prevent single points of failure and decrease service downtime. At Taiwan AILabs, the Engine team focuses on assuring not only developers but also end users of a high level of service reliability and operational performance.


Monitoring is one of the practices that helps us achieve high availability. We utilize production monitoring to learn how our services perform at runtime and to reduce the time to detect (TTD) and time to mitigate (TTM) failures.


AWS Route53 Health Check

To perform production monitoring while maintaining high availability, we need to ensure there is no single point of failure in the monitoring service itself. If we ran the monitoring service on a single instance in a fixed location, failures of the monitored endpoints would be masked whenever that instance went down: we would never know whether the monitoring service or the endpoints were alive, because no alerts would arrive.

AWS Route 53 thus became our go-to option for monitoring external endpoints. Route 53 has health checkers located in multiple availability zones around the world. When we create a health check on Route 53, health checkers across the world start sending requests to the endpoint, determining whether it is healthy and operational based on response time and the number of failed responses relative to the failure threshold.
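In practice we provision health checks with Terraform (described below), but for illustration, the equivalent call through boto3 looks roughly like this; the domain and thresholds are placeholders:

```python
# Illustrative Route 53 health check creation via boto3; in our workflow this
# is provisioned through the Terraform module instead.
import uuid
import boto3

route53 = boto3.client("route53")
route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # idempotency token required by the API
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "ailabs.tw",  # the endpoint to monitor
        "Port": 443,
        "ResourcePath": "/",
        "RequestInterval": 30,  # seconds between checks from each checker
        "FailureThreshold": 3,  # consecutive failures before "unhealthy"
    },
)
```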

We leverage this multi-availability-zone characteristic of Route 53 health checks to keep our monitoring service highly available: even if the health checkers in one AZ break down, checkers in other AZs keep sending HTTP requests to the monitored endpoint. The basic architecture of our monitoring service is presented in the diagram below.

Figure 1. The architecture of the monitoring service


The Route 53 health check sends HTTP/HTTPS requests to the endpoints we want to monitor (e.g., ailabs.tw). We then connect each health check to a corresponding CloudWatch alarm. Whenever the health check changes the endpoint’s state (e.g., HEALTHY to ALARM, or ALARM to HEALTHY), the CloudWatch alarm is triggered. To receive notifications when the alarm fires, we configure an SNS topic for the CloudWatch alarm and subscribe a Lambda function that routes the alarm content (in JSON format) to a Slack channel.
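A minimal sketch of such a Lambda handler, assuming a placeholder webhook URL:

```python
# Minimal Lambda handler sketch: unwrap the CloudWatch alarm delivered via
# SNS and forward a summary to Slack. The webhook URL is a placeholder.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def handler(event, context):
    # SNS wraps the CloudWatch alarm JSON in the record's Message field.
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    text = f"{alarm['AlarmName']}: {alarm['NewStateValue']} ({alarm['NewStateReason']})"
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```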

Figure 2. Health checks on CloudWatch Dashboard


IaC

To manage the infrastructure we deploy on AWS more efficiently and automatically, we adopted Infrastructure as Code (IaC). Infrastructure as Code is the process of managing infrastructure in a file or files rather than manually configuring resources in a user interface [1]. It helps us keep track of the resources we’ve deployed so that we can control the budget and manage resources more easily.

We developed a Terraform module that automatically provisions 7 resources, including an IAM role, health check, CloudWatch alarm, SNS topic, topic subscription, Lambda function, and Lambda permission, onto AWS with a single command: `terraform apply`. Developers who want to monitor an external webpage only need to specify the domain names, port, protocol (HTTP, HTTPS, or TCP), webhook, and Slack channel name in a configuration file (main.tf) and run `terraform apply`.

Figure 3. Example Usage of the module (written in HCL)


In addition to the Terraform module, the Engine team also developed a CI/CD pipeline for the repository that provisions resources on AWS. Once a user pushes their configuration file (main.tf) to the repository on GitLab, the CI/CD pipeline is triggered and runs automated scripts to build and test the configuration. After validation, users can deploy the resources to AWS without installing Terraform on their local machine, since the deploy job runs in a Docker executor on GitLab with Terraform preinstalled in the image. This CI/CD practice not only makes deployment effortless for engineers but also ensures that all infrastructure deployed to AWS complies with Terraform best practices and Engine team standards.


Reference

[1] Infrastructure as Code with Terraform | Terraform – HashiCorp Learn. (2020). Retrieved 3 August 2020, from https://learn.hashicorp.com/terraform/getting-started/intro