Automated Blood Scanner

Imagine a world where any disease or illness that is visible in the blood would automatically be found and diagnosed at home during a routine preventative health self-check.

Fast, Cheap, and Accurate Health Check

Each week, or whenever you aren’t feeling 100%, you put a drop of blood on a small device, and 30 minutes later you have a full and accurate report of every abnormality in your blood.

This could save billions of lives (at least, the lives of those who would otherwise die from treatable illnesses).

That is not far off.

In fact it is possible today, but no one has done it yet.

What makes this possible is a combination of two fields that have made huge breakthroughs recently.

  • Machine Learning
  • Micro Robotics

The first technology is machine learning, and more specifically deep learning using neural networks. This branch of artificial intelligence is especially useful for finding patterns in images. It is used in everything from facial recognition to self-driving cars to diagnosing lung cancer.

But one area where this has yet to be used broadly is in blood screening. From my research, I could only find an article using this technology to diagnose malaria specifically.

The second technology is micro robotics. These are small electronic systems like Arduino or Raspberry Pi that can be combined with sensors and motors to create just about any robotic gadget imaginable.

So combining these two fields, my idea is possible today.

Machine Learning

First, let me explore how machine learning would be used:

The idea is to create a continuously learning, open neural network that can diagnose anything visible in blood. This, of course, would only be possible with the participation of millions of people. But by making a continuously learning system, each new person providing images of blood samples and answering questions about their own status would add greatly to the system’s knowledge.

We know it is possible to build a deep learning network that can learn to recognize thousands of diseases automatically. This is really not that different from the task of facial recognition or other image recognition tasks.

However, in order for it to work accurately, it needs many samples. For each specific disease or illness, it needs at least 10 individuals who have received that diagnosis. Each individual could provide a sample of blood that could be scanned to create thousands of images. In many cases, that would be enough for the system to identify unique characteristics.

Of course, even without those samples, the system could be aided with existing medical imagery from data sets that have already manually identified known patterns. That would give it a start for certain ailments.

However, for this to be effective for general disease and illness diagnosis, the system needs millions of participants providing samples. But of course, once it got to that point, it would continue to learn and improve.

Cheap Automated Blood Scanner

Now, in order for this system to work, it would need a way to scan the blood sample.

Imagine a 3D printer on a very small scale. Attach a spherical lens to any smart phone, and move the blood slide under that lens while taking pictures throughout the sample at various focal lengths.

For a single blood sample, thousands of images could be uploaded, and these images would then be analyzed by the machine to look for patterns associated with thousands of ailments.

This device could easily be made with Arduino for less than 100 dollars today.

Hopefully, if it was mass produced it could be made for less than $5.

Really? Save Billions of Lives?

So how could this possibly be available to billions of people?

First of all, the internet is available in almost every country at decent speeds. Data plans may be too expensive in some areas, but a huge percentage of the world has access to broadband internet and smart phones.

In fact, I just checked, and around 2 billion people have smart phones, which is more than a quarter of the people in the world. That number will only continue to grow.

In fact, here are some related products that actually sparked my idea:

Foldscope

Paperfuge

The Idea: Automated Blood Scanner

So what is needed is a cheap device to go along with these that can use existing smart phones to upload blood images to the internet.

If the Automated Blood Scanner can be produced for under $5, it is cheap enough for even a farmer in Africa to afford. Imagine if he could have this device at his home and use his phone to check his family for early signs of malaria. Before he or his children even get seriously sick, he can catch the problem before the parasite gets out of hand. Instead of spending a week trying to save his child’s life, he can treat his children early, and everyone can work and go to school without the constant sickness that destroys all resources and progress.

There are many problems that destroy resources in developing countries and make it hard to afford improving living conditions. One of the most destructive is malaria, along with other sicknesses that lead to death and, beyond all the pain and suffering, also destroy the family’s time and the resources they have worked hard to gain over the past years.

Getting Started

Well, one clear way is to initially focus on a specific ailment like malaria. If a system could be built that diagnoses malaria quickly, cheaply, and reliably, it could be used around the world by billions of people and start saving lives immediately. Then, while saving lives from malaria, it could also be learning about everything else found in blood at the same time.

Patients could volunteer information to the system to add labels to their blood images. Simple things like height, weight, race, and location could be included, along with lifestyle questions that affect health: Do you smoke? Do you drink? Do you eat broccoli?

And of course, those who are willing could also share whether they have already been diagnosed with various ailments by doctors: high blood pressure, heart disease, lung cancer, HIV, etc.

Anything about the patient could be added to label those specific images. The system would associate the unique patterns in those images with those labels.
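
As a rough sketch (the field names here are purely illustrative, not a defined format), a labeled submission might look something like this:

// A hypothetical labeled submission (field names are illustrative only)
const sample = {
  images: ['scan-0001.jpg', 'scan-0002.jpg' /* ...thousands more */],
  profile: { heightCm: 170, weightKg: 65, location: 'Kisumu, Kenya' },
  lifestyle: { smokes: false, drinks: false, eatsBroccoli: true },
  diagnoses: ['malaria'],          // confirmed by a doctor, if known
  medications: ['artemisinin'],    // current treatment, if any
  feeling: 'worse'                 // self-reported status
};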

Diagnosing Everything

This is how the machine learns to diagnose. When it sees a learned pattern again in a new image, it can add the associated labels and score them with a percentage match. These results could be sent to a human doctor to verify, but it would quickly surpass the ability of human diagnosticians.

It will even find patterns that experts have never recognized before. Just like these systems for diagnosing lung cancer or sepsis:

Diagnosing Cancer with Machine Learning

Prevent Sepsis Using EKG with Machine Learning

Beyond Diagnostics

The system is not restricted to learning to diagnose. It can also learn complex patterns between blood images and medicine.

In other words, it can learn what medicine or treatment works best in specific situations.

For example, if every person with malaria has a specific pattern in images of their blood cells, then whenever this reliable pattern is seen again, the machine can report that the blood image has a 97% correlation with malaria. But at the same time it can find other correlations: maybe the person has malaria, HIV, high blood pressure, and an infection that could easily spread quickly. This combination of ailments is a recipe that could quickly lead to death. Perhaps the typical medicine that is used for malaria needs to be adjusted because of the weak immune system.

Someone with HIV and malaria could respond differently than a patient with high blood pressure for example.

This is where the diagnostic machine becomes a prescriptive expert. If a patient enters the medicine they are taking and continues to supply additional blood images, the machine can literally see the effect of different medicines in the blood. In fact, it could even learn to recognize the unique patterns that indicate the presence of specific medicine in the blood.

The patient can also enter additional information like whether they are feeling better or worse; whether they are gaining weight or losing it; whether they are back to normal life or lying on their deathbed.

So as the medicine has its effect, it can be correlated with a success or failure for the treated ailment. But at the same time, the system is also correlating that information with everything else it knows about the patient. After millions of patients, it can quickly learn complex patterns and prescribe the best treatment for each person. For example, overweight patients with high blood pressure and heart disease may respond better to a specific medication with fewer side effects. On the other hand, an athletic person with HIV might need an entirely different medicine for the maximum success rate.

Revolutionary and Should Exist Today

This technology is revolutionary. It is possible. It can exist today and it should exist today. All we need to do is make it and keep making it cheaper and better and more accurate.

I imagine that after five years this system could completely change life expectancy, especially in the majority of the world where lab technicians are overworked and treatment is given without accurate diagnostics.

Public Domain Idea

This idea belongs to humanity, for the good of all, for the honor of our Creator - that we may show our love to one another.

In other words, I am posting this idea publicly to prevent any patents. This is not an invention that should be patented. This is an idea that should be available to everyone. Let the free market make as many versions as possible, as cheaply as possible, available to all.

This idea covers as broad a mechanism as possible to declare any usage as free from patent:

  • Using any driving mechanism (e.g. motors, springs, gears, etc.)
  • To move any digital camera (e.g. smart phone, micro camera, etc.)
  • Over any body fluid sample (e.g. blood, urine, saliva, etc.)
  • Prepared with any technique (e.g. slide, smear, capillary, centrifuged, etc.)
  • Along any direction (e.g. 3D pathways, 2D pathways, etc.)
  • With any transformations (e.g. focal length, light source, etc.)
  • For any metric (e.g. blood count, disease prognosis, infection prognosis, etc.)
  • Analyzed with any software (e.g. neural networks, genetic algorithms, etc.)
  • To generate any recommendation (e.g. medicine prescriptions, treatment schedules, etc.)

Azure Mobile Center

After watching a YouTube video about ReactXP, I was reading about it on my phone when this video from React Conference auto-played. It didn’t have an interesting title, so I was about to turn it off when I noticed the product was big enough to be given its own subdomain at Azure.

Now, Microsoft is the king at making great dev tools, so I thought I should pay attention if this is their primary offering for mobile development on Azure.

It’s awesome:

  • First Class Support for React Native
  • Analytics, Events, and Crash Reports that give stack traces in the original source code (using source maps)
  • Code Push that is triggered by git deploy (Deploy changes directly to an installed app from the git release branch)

The video also showed some prototypes of some interesting testing tools. For example, he had the app open on multiple test devices, and the UI events triggered on one device were synced to the others across the network (parallel testing).

I recommend watching the video and seeing the cool features for yourself.

Videos

Presentation at React Conf 2017

CodePush

ReactXP

Finally, I found what I have spent years looking for and have experimented with creating multiple times:

A way to develop native and web apps with the same code base (even the view layer).

Let me be specific, I’m not talking about hybrid apps. Of course it’s been possible to use Cordova and make a hybrid web app for many years, but the performance was not acceptable.

We need native speed, which means using native UI components on the UI thread, not blocked by JavaScript’s single-threaded runtime.

In the past few years, two major solutions have come out: React Native and NativeScript.

However, although these both provide a good solution for native speed mobile apps using JavaScript code, they don’t have a unified UI with the web browser.

So the view layer was still not cross-platform, and using them while trying to share a code base with the web was not easy.

Leave it to Microsoft to solve it. They did. The Skype team needed the same thing I wanted: a single code base across all devices including the browser.

So they made ReactXP. It is a library built on top of React and React Native that unifies the view layer using standard components.

Now it is possible to define the view layer once and know it will look the same across all devices and browsers.
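
For example, a minimal cross-platform view might look something like this (a rough sketch following the ReactXP samples; the component and API names are taken from the ReactXP docs):

import RX from 'reactxp';

// One view definition that renders with native components on iOS/Android
// and with the DOM in the browser
class HelloWorld extends RX.Component {
  render() {
    return (
      <RX.View>
        <RX.Text>Hello from every platform</RX.Text>
        <RX.Button onPress={() => RX.Alert.show('Hello', 'Button pressed!')}>
          <RX.Text>Press me</RX.Text>
        </RX.Button>
      </RX.View>
    );
  }
}

RX.App.initialize(true, true);
RX.UserInterface.setMainView(<HelloWorld />);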

Also, because it is built on top of React Native, dropping down to platform specific code is possible when needed. So that’s great when you want to expand your base web experience with extra device features in the app.

As a bonus, because this is built by Microsoft, who created TypeScript, TypeScript is fully supported without having to figure it out as you go. (They also support Flow as an alternative.) If you have used TypeScript and know some of the great advantages of using cutting-edge JavaScript with type safety and documentation, you know it’s awesome.

So this is finally the solution I have been waiting for:

  • One UI to rule them all,
  • One code to define them,
  • One build to make them all,
  • And in the stores deploy them.

Or just use the web app if you don’t want to install, etc.

Videos

ReactXP First Look

GraphQL

http://graphql.org/

GraphQL brings the power of the data query all the way down to the component level.

Most data access requires multiple repetitive layers that each slightly transform the data:

  • Storage (SQL or NoSQL data store)
  • Server (REST or ad-hoc Web API)
  • Client-Side HTTP Requests (Ajax, RxJS, etc.)
  • Client State (Angular services, React Redux, React MobX, etc.)
  • Components

In the above architecture, a query about a single object must be implemented at each level. Also, common patterns like filtering and paging must be handled at each level as well.

To make a change on the UI that requires additional data about an object, each layer must be modified to provide that additional data. Not only is this tedious work, it also affects the entire code base and could easily introduce bugs, requiring testing at each level.

GraphQL Solves Client Data Access

GraphQL solves that problem by allowing the client to define the structure of the query. And even better, each component can define its own data needs.

  • Component Data Query
  • Component

Sample

Here is a sample using React-Apollo (ES6):

function TodoApp({ data: { todos, refetch } }) {
  return (
    <div>
      <button onClick={() => refetch()}>
        Refresh
      </button>
      <ul>
        {todos.map(todo => (
          <li key={todo.id}>
            {todo.text}
          </li>
        ))}
      </ul>
    </div>
  );
}

export default graphql(gql`
  query TodoAppQuery {
    todos {
      id
      text
    }
  }
`)(TodoApp);

The actual GraphQL Query is here:

query TodoAppQuery {
  todos {
    id
    text
  }
}

In this example, the query is requesting todos with the id and text of each.

Whenever it is time to display this component, that query is automatically requested, and the result is then injected into the React component’s data prop.

Without getting into the details of how that works, whenever something triggers the UI view to change, all the new queries will be combined together into a main query that is then sent to the server endpoint.

What is even better, most client-side implementations automatically cache the requests, and any data already available in the cache will not be requested from the server a second time (of course, this can be controlled in case a fresh copy is needed).

So How Does the Server Provide the Data?

On the server side, the server has a single GraphQL endpoint that will receive all requests. It is the responsibility of the server to parse the request and give only the requested data back to the client.

Of course, the standard libraries handle the parsing and the developer has only one requirement:

Resolve the data request for each data type:

const resolverMap = {
  Query: {
    author(obj, args, context, info) {
      return getAuthorById(args.id);
    },
  },
  Author: {
    posts(author) {
      return getPostsByAuthorId(author.id);
    },
  },
};

The advantage of this is that the developer can focus on a single data type at a time without worrying about nested types or deciding how deeply nested the response data needs to be.

In the example above, each resolver has a specific purpose: return a single author, or return the posts that belong to an author:

...
getAuthorById(args.id)
...
getPostsByAuthorId(author.id)
...

The getAuthorById call doesn’t have to worry about whether to return posts or what data about the nested posts might be needed by the client. It just returns the author data. Likewise, getPostsByAuthorId has a very clean purpose and doesn’t have to worry about nested objects.

Because each resolver returns the entire object, the framework can automatically prune unnecessary data and combine the multiple objects into the object graph that was requested by the client.

Also, the GraphQL libraries support Promises, so those resolver methods can use async/await and have a very clean implementation.
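
For example, the resolver map above could be written with async functions (a sketch that assumes getAuthorById and getPostsByAuthorId return Promises):

const resolverMap = {
  Query: {
    // Returning a Promise is enough; async/await just keeps it readable
    async author(obj, args, context, info) {
      return await getAuthorById(args.id);
    },
  },
  Author: {
    async posts(author) {
      return await getPostsByAuthorId(author.id);
    },
  },
};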

The end result is that the GraphQL library will wait for all the promises, combine all the data, prune the extra data, and return exactly what was requested in a single network response.

So How Does the Client Change the Data?

Another cool part of GraphQL is how data changes. In GraphQL, a data change is called a mutation.

The nice thing about a mutation is that it supports an optimistic UI. This means that the UI data is modified locally on the client while the change is being processed on the server. This allows the UI to update with a preview of the data change while waiting for the server response. Then, when the server sends the actual response, it replaces the temporary optimistic data. This is one of the benefits of a mutation compared to a simple REST POST.
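
Here is a rough sketch of what that looks like with React-Apollo (the AddTodo mutation and its fields are hypothetical; optimisticResponse supplies the temporary data shown until the server responds):

const AddTodoButton = ({ mutate }) => (
  <button
    onClick={() =>
      mutate({
        variables: { text: 'Buy milk' },
        // Shown in the UI immediately, then replaced by the real server response
        optimisticResponse: {
          __typename: 'Mutation',
          addTodo: { __typename: 'Todo', id: -1, text: 'Buy milk' },
        },
      })
    }
  >
    Add Todo
  </button>
);

export default graphql(gql`
  mutation AddTodo($text: String!) {
    addTodo(text: $text) {
      id
      text
    }
  }
`)(AddTodoButton);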

On the server side these mutations must be implemented in the same way as the resolvers:

const resolverMap = {
  ...
  Mutation: {
    addAuthor(_, { firstName, lastName }) {
      ...
      return author;
    },
  },
  ...
};

The differences from a query resolver are easy to see below:

...
Query: {
  // Get an author (args contains the author id)
  author(obj, args: { id }, context, info)...
...
Mutation: {
  // Create an author (args contains the new author's fields)
  addAuthor(_, args: { firstName, lastName })...

Really, the only difference is the Query and Mutation keyword. They both use the second parameter as the args, and they use those args either to get data or to modify it.

This Looks Cool, Too Bad I Can’t Use it

That’s where it gets interesting. It is possible to use the parser on the client side and basically wrap a REST API with a GraphQL schema, which you can then use in your client-side code.

Then, the next step is to move this to the server and slowly replace the REST calls with direct calls to the necessary resources.

This provides an adoption path where it can be used in client side web apps today.
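
As a sketch of that adoption path (the /api/todos endpoint and this tiny schema are hypothetical), the whole thing can run in the browser using graphql-tools and graphql-js:

import { makeExecutableSchema } from 'graphql-tools';
import { graphql } from 'graphql';

// Describe the data the components need
const typeDefs = `
  type Todo {
    id: ID!
    text: String!
  }
  type Query {
    todos: [Todo]
  }
`;

// Resolve the GraphQL query by calling the existing REST API
const resolvers = {
  Query: {
    todos: () => fetch('/api/todos').then(res => res.json()),
  },
};

const schema = makeExecutableSchema({ typeDefs, resolvers });

// Execute the same query the components use, entirely on the client
graphql(schema, '{ todos { id text } }').then(result => console.log(result.data));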

This video gives a good overview of that process:

https://www.youtube.com/watch?v=UBGzsb2UkeY

Notes

I found two implementations of GraphQL:

I also found a VSCode extension that provides all the cool editor features we love:

Zero to GraphQL in 30 Minutes

Feature-Oriented Software Development and Feature-Organized Code

Software development is all about the apps. If you don’t have a user interface, then your software is pointless. There must be a way for somebody to work with your software to accomplish what they want to do. These are the features: what your app can actually do. Really, an app is nothing more than a collection of features. The best apps have few features that are closely related and are exceptional in quality.

Therefore, software development should focus on developing features.

All too often, we developers focus on the code, “Look how cool that code is.” Users don’t care. They want a cool feature that works – that is all. They don’t care about your hard work. If you don’t give them exceptional features, they will trash your app (1-star) and nobody will ever touch it again.

We must focus on the features. We must make exceptional features that shine. That is the entire purpose of software development. In fact, maybe we should just drop that term and call ourselves “feature developers”. Software is pointless unless it contributes towards exceptional features.

Background

We developers are overly focused on our code. Look at the OOP (Object Oriented Programming) paradigm, for example. Its main concern is how you design your class hierarchy.

That has little to do with a feature. So we have Use Cases where we try to figure out every possible way an object might be used. That is getting closer to the features, but it is putting the developer’s concern (class design) ahead of the user’s concern (features).

OOP solves many problems, but it also introduces a way of thinking that produces unnecessary complexity. A developer has the impression that he must design perfectly defined types that can handle all possible scenarios before programming can begin. This results in the common tendency of inexperienced programmers to try to solve “world hunger” with every project. Many times he doesn’t even remember what feature he is trying to implement.

However, even a developer with a clear idea of the desired features can face a complex project. At the beginning of a project, each class is well defined and exists in a small, easy-to-read file. But each new feature introduces changes that can span multiple class files. Over time, these files grow bloated with unrelated code. Even more, as multiple developers modify those files for their own needs, the files accumulate multiple styles and design patterns that make the code for that class very difficult to understand. Feature dependencies can be strung throughout the code base. “You can’t change that because it will break this. Oh, and users have become used to that bug, so don’t fix it.”

The entire project becomes a dark and dangerous place where you might violate some unwritten law by changing something you weren’t supposed to change. Then, you make yourself an enemy of the whole team who has to waste hours trying to trace down all the unexpected bugs caused by your change. As stress builds, the team becomes more concerned with who to blame rather than developing good software together. Obviously, this does not contribute to a positive development team.

My observation is that the end result of traditional OOP is an exponential growth in the cost of adding each new feature to a project.

Software development should not be object-oriented. In fact it should not be oriented to anything to do with the code itself. Software development should be feature-oriented. Developers should always have in mind what feature they are contributing towards.

With the big picture of the target feature in mind, then a developer can use whatever code most simply implements that target feature.

(Keep in mind that there are “system features” which provide the foundation on which the “user features” are built.)

Thinking Differently about Code Organization

My experience with C# and its changes over the years eventually led me to a different way of organizing my code.

The introduction of partial classes to C# initially allowed a simple way to separate designer generated code from human code.

Then later, extension methods allowed me, being used to strictly defined traditional OOP types, to start thinking about types that could be extended at a later time.

Then, my exposure to JavaScript, with its purely dynamic types, made me realize even more possibilities about objects and type definitions and how they could be extended even at runtime as needed.

In the end, these experiences led me to organize my code differently than I had been taught or had ever done before. In my latest project, I unintentionally organized my code in what I will call feature-organized.

Towards Feature-Organized Code

As I was working on this project, I wanted a clean working space so that I could focus solely on a single feature at a time. Therefore, each time I started implementing a new feature, I added a new file to the project and kept all the code related to that feature in that file.

However, I was still working with the same set of objects and their classes. Still, I wanted to keep all the additions to the classes near to the features that needed to use those additions. For example, when I needed to add another property to a class, I wanted that property to be defined right there in the same file with the logic that used that property, instead of going to the file where the class was originally defined.

This was simple to do because of partial classes in C#. I was able to continue the definition of any classes in that feature file. I added whatever properties or methods needed to exist for that feature to work. Also included were classes that only had meaning for that specific feature and any processing for it.

This would have been possible in many languages, not just C#. It would be simple to implement in any language that has a mechanism to allow open type definitions that span multiple files or any language that supports dynamic types. (It is also possible in languages with closed type definitions, but not as simple.)

What is Feature-Organized Code?

Feature-organized code is code that keeps all properties, methods, and any logic related to a single feature in the same place.

The source files are grouped together by features rather than by class definitions.

Instead of having everything about a class exist in its own file, the parts of the class are defined where they are needed.

Obviously, balance is needed. A good balance is to define the primary set of classes in the “core feature”. These core definitions would basically represent the data of each class by implementing a constructor to create the object and fields or properties to hold the data for that object. Then in the feature files, these core definitions can be extended with calculated properties and methods needed for that feature.

The core feature is concerned about the construction of objects. Other features are concerned about using those objects.
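
Here is a rough JavaScript sketch of that split (the Order class and the discount feature are hypothetical; in C# the same extension is done with partial classes instead of prototypes):

// order.js — the "core feature": construction and data only
export class Order {
  constructor(id, lines) {
    this.id = id;       // order identifier
    this.lines = lines; // [{ price, qty }, ...]
  }
}

// discount-feature.js — everything the discount feature needs lives here
import { Order } from './order';

// Extend the core class right next to the feature logic that uses the extension
Order.prototype.totalWithDiscount = function (rate) {
  const total = this.lines.reduce((sum, line) => sum + line.price * line.qty, 0);
  return total * (1 - rate);
};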

The code to implement each feature could be organized as a single file or a collection of closely related files:

  • One file (Simple Features that require only a few types)
  • Multiple files with a similar name prefix (In a project with a small number of features)
  • A sub folder (In a medium project with complex features that only work when many objects coordinate)
  • A sub folder for each app layer with similar naming (In a large project that includes multiple layers that each exist in their own projects: i.e. a Data Access Layer, Business Logic Layer, Application Layer, Unit Testing, etc.)

Advantages of Feature-Organized Code

  • Singular Focus

The developer sees only code that is relevant to a single feature in each file. He can focus entirely on that one feature without thinking about the concerns of other features.

  • Predictable Locating of Feature Code

Each feature has its own specific place among the many files of the project. Everything about that feature’s implementation can be found in those files.

  • Clear Dependencies

Every class and the properties and methods that are needed to implement a feature are all together in the same place. This provides a complete picture of all dependencies required for that feature.

Also, if a feature depends on another feature, that can be clearly indicated in a comment at the top of the file. I refer to these as parent and child features.

  • Isolation of Features

The implementation of each feature is isolated from other unrelated features. In many cases, it is even possible to remove an entire feature and all the changes it introduces to the class hierarchy simply by removing those files from the project. This can be done without affecting any other features (except its child features).

  • Rapid Orientation

Since all code for a single feature is together, a developer can quickly orient himself with the entire scope of a feature. There is no need for him to comprehend the entire class hierarchy of the entire project. He must only understand the properties and methods currently being used by that single feature. This is simple because their definition is right there with the code that uses them. He has no need to dig through multiple files of class definitions looking for the various properties and methods of concern, being distracted and overwhelmed by everything not currently relevant.

In addition, if each layer of the application is organized in a similar fashion he can quickly locate the business logic and data access relevant to that feature. In fact, if the data access layer is segmented in the same way, then he can also see what tables, columns, and other db objects are relevant to that feature.

In this way, a developer can quickly get a full picture of everything relevant to that feature from the database all the way to the user interface.

This is the key to development that does not scale up in cost as the project grows. Whether the project has 5 classes or 1,000, the developer can learn everything relevant by simply looking at the feature files. He doesn’t even need an IDE to help him randomly browse through the code, jumping to definitions that are scattered across thousands of files. He can simply read the files of that single feature to get a complete picture of it.

  • Isolated Development

One possibility that this provides is isolated development of features. Because all code for a feature must be in a certain location, developers can easily work on different features without conflicting with one another. Developers can make changes independently and quickly without worrying about who else might be affected by their work. Also, if a change is required beyond the current feature (like in a parent feature), they can communicate those needed changes to a senior developer who can then coordinate how to proceed.

  • Secure and Simple Outsourcing

For large projects, a sub project can be created which includes only a copy of the necessary features (the target feature and its parent features).

This sub project would greatly improve the performance of the outsourcer because it presents him with only the relevant code. This reduces the likelihood that he will make changes in the wrong location or be overwhelmed by the complexity of a large project.

It also improves security because the entire source code is no longer being passed on to a partially trusted party, nor is it necessary to allow him access to the team’s source control or other servers. This can be very important for large closed-source projects that have sensitive code.

Also, the outsourcer can send his changes in just by zipping up the subfolder for the target feature. This outside code can then be code reviewed just by reading that small group of files without even needing an IDE. (This could even be done on a smart phone through email where the developer in charge of the outsourcing can quickly provide feedback to the outsourcer.)

When the outsourced code reaches an acceptable point, it can easily be merged back into the main project simply by copying those files (likely into an isolated branch in source control). All affected files would be in one location and could easily be code reviewed and tested before being accepted into the main project.

If this were a common scenario, it would even be possible to make some build tools that automate the process of creating these isolated sub projects for a single feature. The build tool would need only a list of features (the target feature and its parent features). Then it could simply include those folders and produce a new project file with references to only those items.

This may even be the preferred means of development for the entire development team. It would greatly improve build times for large projects and would boost developer productivity by allowing them more freedom of control over their development environment. (They could work from home on their own machine for example.) This also introduces great training possibilities for new developers that would help them ease into familiarity with the main project without being overwhelmed by its large scope.

  • No Ownership of Code

Because every feature is isolated and relatively small in comparison to the entire project, it also remains as simple as possible.

This prevents ownership concerns in the development team. Each developer works on a feature until it reaches maturity, then he moves on to another feature. Another developer may revisit that feature at a later point to improve it. Because of rapid orientation, any of the developers should be able to work with any feature. There are no hidden dependencies that will remain the secret of the “owner” of that code. Everything relevant and every dependency is contained in one small set of files.

  • High Quality Features

It is simple to ensure that each feature contains every component of a high-quality release: documentation, conformance to code conventions, unit tests, code coverage, code contracts, a polished user interface for that feature, etc.

A feature would not be considered mature until it reaches the highest standards of your application. It can easily be excluded from release until it can meet those standards.

This compels the development team to complete fewer high quality features instead of many mediocre features.

This brings us back to feature-orientation. Again, an app is nothing more than a collection of features. This focus on high quality features produces apps that users enjoy and will outshine their competitors.

Conclusion

Feature-organization of code is a key to promoting a feature-oriented paradigm for software development.

I will be using this concept in my own development and will later come back with some practical tips on how best to implement feature-organization.