A simple technology evaluation framework
Generalizing from my prior post on evaluating blockchain applications, here is a simple framework for evaluating any technology, library, framework, language, etc.
What problem does the new technology claim to solve?
Start with basic research. What do the creators claim as the benefits of the thing? If it’s a library or framework, the claims are often easy to find. For a broader technology, this could take a little digging.
Sometimes people will clearly state that they don’t know what something could be used for. This is perfectly fine – it’s great, even. That just means we get to collectively brainstorm possibilities (ideally informed by in-context user data).
If something promises to be the greatest but doesn’t make any specific claims about how or why, that’s a warning sign.
Is the claimed problem an actual problem?
Not every claimed problem is one that people actually have. Macro technology trends fall victim to this test all the time: “it’s like [x], only with an app” or (per my prior post) “it’s like [x], only on the blockchain” are good warning signs that you’re dealing with a solution in search of a problem.
Do existing approaches not already solve the problem?
Is the new technology necessary? There’s something to be said for the power of discovery to spark your imagination and fuel your excitement for a better future, but in some cases the future is already here. Is there something more established that can solve the problem (or get you most of the way there)? Even if an established technology doesn’t do everything you want, it’s often more stable and broadly supported, and a partial solution may be all your scenario requires.
From ride-share services that take multiple passengers along fixed routes (i.e., “buses”) to apps that let you pay for goods automatically dispensed from a conveniently located machine with no humans required (i.e., “vending machines”), startups that fail this test are easy to spot.
Is the new technology actually effective at solving the problem?
Does the new technology actually work? Again, it’s ok if it doesn’t yet, as long as no one is claiming that it does. If something is truly bleeding edge, sufficient efficacy data might be lacking. These are good cases for spikes, prototyping, and general experimentation.
Are the costs incurred worth it?
As technologists, we’re usually confident in our ability to identify the direct costs of something new: Are you trading memory usage for speed? Are you trading generality for overall performance? But it’s important to also assess indirect, long-term, and especially human costs. Does the new technology require the entire department to be retrained? How easily can we get help if we run into problems? Does the vendor look like they’ll still be around to support the product in 10 years, when we’re likely still stuck with it because enterprises often move glacially? If my internet-connected pet feeder company goes bust, how will my animals get fed?
This list can get really long and veer into interesting philosophical, moral, and ethical territory – all of which is important, but too difficult to cover in blog-length posts. A few teaser questions: What are the potential negative consequences of this technology becoming widespread? Are existing legal and regulatory frameworks and agencies equipped to support an acceptable level of safe use, or to provide a mechanism of redress if something goes wrong?
It’s also worth thinking about unintended consequences. “I sure wish my light switches required a wifi connection so that they’d randomly fail sometimes for inexplicable reasons” is not a desire that people have, but the push towards IoT in an environment of cheap products makes this an expected outcome.
Whether you start from this simple list or dig into deeper levels of detail and more nuanced evaluation, the end goal should be the same: make intentional choices about what you pursue and adopt. That’s no guarantee you’ll get complete information or have perfect judgment, but acting intentionally lets you retrospectively refine your meta-process, find the gaps, and prevent similar errors in the future.
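For the mechanically inclined, the questions above can be modeled as an ordered, short-circuiting checklist: each question is a gate, and a failure at any gate is reason enough to stop. This is only an illustrative sketch – the names, predicates, and example answers below are all made up, and in practice each “predicate” is a judgment call, not a boolean.

```python
# Illustrative only: the post's evaluation questions as a gating checklist.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Check:
    question: str
    passes: Callable[[], bool]  # your judgment, encoded as a predicate


def evaluate(checks: List[Check]) -> Tuple[bool, Optional[str]]:
    """Run checks in order; stop at the first failure (a gating evaluation)."""
    for check in checks:
        if not check.passes():
            return False, check.question
    return True, None


# Hypothetical evaluation of a "ride share, but it's a bus" startup:
checks = [
    Check("Does it claim to solve a specific problem?", lambda: True),
    Check("Is the claimed problem an actual problem?", lambda: True),
    Check("Do existing approaches fall short?", lambda: False),  # buses exist
    Check("Is it actually effective at solving the problem?", lambda: True),
    Check("Are the costs incurred worth it?", lambda: True),
]

adopt, failed_at = evaluate(checks)
print(adopt, failed_at)
```

The short-circuit matters: there’s no point weighing costs (the last question) for something that fails an earlier gate.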