Project: Edge AI in healthtech

The number of internet-connected devices is growing continuously, along with the demand for data in ever more forms and sizes. Data collection in Internet of Things (IoT) devices has become more complex, especially in intricate sensing systems, requiring more storage and computational capacity in the cloud.

In addition, real-time analytics, essential in autonomous systems, requires real-time decision making on field-deployed edge devices. Machine learning and deep learning are also becoming more accessible on edge devices for Edge AI applications, most notably classification and object detection. Edge nodes allow real-time management of collected data: data can be stored, processed, and filtered locally, then sent to the cloud for further analytics.
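The filter-then-forward pattern described above can be sketched in a few lines. This is a minimal, hypothetical example: the reading values, validity range, and summary fields are illustrative stand-ins, not part of any specific edge platform.

```python
# Sketch of an edge node's filter-then-forward step: keep raw readings
# local, drop obvious sensor glitches, and upload only a compact summary.
# All values and field names here are illustrative assumptions.

def summarize(readings, low=0.0, high=100.0):
    """Drop out-of-range readings, then return count/min/max/mean for upload."""
    valid = [r for r in readings if low <= r <= high]
    return {
        "count": len(valid),
        "min": min(valid),
        "max": max(valid),
        "mean": sum(valid) / len(valid),
    }

raw = [21.5, 22.0, -999.0, 23.5, 150.2, 22.5]  # two sensor glitches
payload = summarize(raw)
print(payload)  # only this small dict would be sent to the cloud
```

The point of the design is bandwidth and storage: the edge node holds (or discards) the raw stream and the cloud receives only the aggregate it needs for further analytics.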

Edge AI is becoming easier to integrate into embedded devices; some examples include Mini-YOLOv3, TensorFlow Lite, and API-connected devices using CustomVision and Google Vision. However, the vast majority of individuals with domain knowledge in sectors that would greatly benefit from these advances lack the hardware or software background to apply the technology effectively.

Project goal:
In healthcare, pathologists take 2-4 weeks on average to diagnose patient samples; inspection mostly involves examining samples under a microscope and imaging the findings.

The process is time-consuming, repetitive, and only about 90% accurate.

Recently, a Google research team used deep learning to detect and classify breast cancer more accurately than doctors.

Going a step further, my project goal is to use images (or even a live feed from microscopes) and an edge device (a Raspberry Pi) to run a classification ML model. The aim is to automate classification and reduce false negatives.
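One way to act on the "reduce false negatives" goal, independent of which model runs on the Pi, is to bias the decision threshold toward recall. A minimal sketch, assuming a binary benign/malignant classifier whose confidences are simulated here (the scores and threshold values are hypothetical, not from any trained model):

```python
# Sketch: lowering the decision threshold for the positive class trades
# extra false positives for fewer false negatives, which is the usual
# preference in a screening setting. All numbers are illustrative.

def classify(p_malignant, threshold=0.5):
    """Label a sample 'malignant' if model confidence meets the threshold."""
    return "malignant" if p_malignant >= threshold else "benign"

# Simulated model confidences for five samples (hypothetical values).
scores = [0.92, 0.55, 0.48, 0.30, 0.05]

# Default threshold: the borderline 0.48 sample would be missed.
default = [classify(s) for s in scores]

# Screening threshold: the borderline sample is flagged for review.
screening = [classify(s, threshold=0.35) for s in scores]

print(default)    # ['malignant', 'malignant', 'benign', 'benign', 'benign']
print(screening)  # ['malignant', 'malignant', 'malignant', 'benign', 'benign']
```

In practice the threshold would be chosen from a validation set's precision/recall curve, and borderline cases would still go to a pathologist rather than being auto-diagnosed.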

Bonus (a definite maybe): current COVID-19 classification is done by manually inspecting CT scans of patients’ lungs for certain anomalies.

Anyone willing to mentor me is welcome!

Are projects usually done in groups, or are they individual projects?

Hi @ejri. This seems like a pretty great fit for my skillset. I’d be happy to take a look at your proposal. You can find me on the OpenEMR Slack or email me at

Hey @ejri, the main limitation here is the dataset. For example, the Google Brain team collected their X-rays by working directly with the government, the NYU team partnered with NYU Langone hospital to access X-rays, and so on. Of course, there are some public datasets, but models trained on them will NOT generalize.

I would love to chat more in-depth. I might have some solutions regarding the above challenge. Please feel free to reach out at

Hi @ejri, I am a Sr. software architect interested in your project. Let’s chat more,