Monday, September 19, 2016

Is Light Speed Travel Possible?

Off topic for a software blog, but just a quick thought - if matter can be converted into energy and reconstituted back into the exact same form of matter again (all particles arranged as they were before), then yes...that would be one way of moving mass at light speed. Ah...provided that the resulting form of energy could itself be moved at light speed. Might take a while to convert matter to energy and back again, though.


It's kind of a chicken-and-egg problem, isn't it? What converts the energy back into matter? Wouldn't the transducer need to be in position at the destination first (potentially light-years away)? Or perhaps there is some way to remotely manipulate particles from afar so that they can be rearranged into a predefined arrangement, and only the energy would be sent across. That would be more or less cloning the matter rather than actually transporting it via its energy form.


Bah, can't help myself, let's do a software thing...a 64-bit number (a long) can hold a value up to about 9.2 x 10^18 - enormous, but still nowhere near the roughly 10^80 atoms in the observable Universe. And we can store those numbers all day long. How many numbers, bits, etc., would it take to store all the atoms in a frog? Assuming we had the necessary transducers to take in a signal and reconstruct the matter, what would it take to beam up a frog, Scotty? Assume also that we can digitally encode matter and send it to the other side. Hint: apply the problem-solving technique from the last post. A rough sketch follows.
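
Here's a rough back-of-the-envelope sketch in Clojure. The frog's mass, its water content, and the bits-per-atom figure are all loose assumptions of mine, just to get an order of magnitude:

```clojure
;; Back-of-envelope: how much data to encode a frog, atom by atom?
;; Assumptions (all rough): a 30 g frog that is mostly water,
;; ~100 bits to record each atom's element and position.
(def avogadro 6.022e23)
(def frog-grams 30.0)
(def water-molar-mass 18.0)              ; grams per mole of H2O
(def atoms-per-molecule 3)               ; H2O = 3 atoms

(def frog-atoms
  (* (/ frog-grams water-molar-mass) avogadro atoms-per-molecule))
;; => ~3.0e24 atoms

;; A signed 64-bit long tops out at ~9.2e18, so even *counting*
;; the frog's atoms overflows a long:
(Math/ceil (/ (Math/log frog-atoms) (Math/log 2)))
;; => ~82 bits just to hold the count

(def bits-per-atom 100)                  ; element + 3D position, say
(def frog-bits (* frog-atoms bits-per-atom))
(/ frog-bits 8 1e24)
;; => ~38, i.e. tens of yottabytes to describe the whole frog
```

So a long can't even count the frog's atoms, let alone encode them, and the full description runs to tens of yottabytes. Scotty has his work cut out.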


Someone should make a GitHub project out of this...oh wait, I'm on it...here it is.

How to solve any problem in 5 steps


1. Understand the problem...for real. This means establishing the criteria for success as well.
2. Use that understanding of the problem to search for solutions acceptable to those impacted.
3. Weigh the solutions to find optimal candidates. This step may involve testing feasibility.
4. Implement the optimal solution.
5. Measure for success. If successful, end; else, repeat (see the sketch after this list).
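
For fun, here are the five steps as a control loop, sketched in Clojure. The step functions are placeholder stubs (my assumptions, not a real API); the loop structure is the point:

```clojure
;; Placeholder stubs so the sketch runs; real versions would be
;; domain-specific.
(defn understand       [problem] {:problem problem :criteria :defined})
(defn search-solutions [understanding] [:solution-a :solution-b])
(defn weigh            [candidates] (first candidates)) ; may be nil
(defn implement        [solution] {:solution solution})
(defn success?         [outcome understanding] true)    ; the measurement

(defn solve [problem]
  (let [understanding (understand problem)              ; step 1
        candidates    (search-solutions understanding)  ; step 2
        optimal       (weigh candidates)]               ; step 3
    (cond
      (nil? optimal) :no-feasible-solution              ; end here
      (success? (implement optimal) understanding) :done ; steps 4 and 5
      :else (recur problem))))                          ; else, repeat

(solve "beam up a frog")
;; => :done
```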


All steps depend on the previous steps; therefore, the most critical step is step 1. Step 4 may be somewhat arbitrary depending on the outcome of step 3. That is to say, there may not be one optimal solution per se, but a number of similar pseudo-optimal solutions - in that case, pick one, or start more than one and let the best emerge (if possible). After step 3, there may turn out to be no feasible solution. In that case, end.

Why Learn Lisps?

In learning Clojure, I came across this fantastic series of tutorials: https://aphyr.com/posts/305-clojure-from-the-ground-up-macros


This specific topic on macros explains a bit about expression trees and how macros in Clojure and some other Lisps can be used to rewrite code before it ever runs. This evoked a memory of an MIT OCW class video I watched years ago, which was taught using the Scheme flavor of LISP. Since then, MIT has switched to Python. But it made me think about how different it must be to start learning to program by directly writing expression trees. Isn't that essentially what compilers produce when they compile languages like Java, C, C#, etc.? So learning a Lisp like Clojure gives you a sense of what is actually happening when those languages are compiled, and a base for writing a compiler (if you haven't built one already) - see the tiny example below. With so many CS programs centered on pragmatism these days (e.g. programming), it may be worthwhile for many CS grads and self-learners to pick up some Clojure with this perspective in mind.
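
To make the code-as-data idea concrete, here's a tiny macro sketch. `unless` is the classic teaching example (hypothetical here; clojure.core already ships `if-not` for this):

```clojure
;; A tiny macro: code is data, so we can rewrite the expression
;; tree before it is ever evaluated.
(defmacro unless [test then else]
  (list 'if test else then))   ; swap the branches of a plain `if`

(unless false :yes :no)
;; => :yes

;; We can inspect the rewritten tree directly:
(macroexpand '(unless false :yes :no))
;; => (if false :no :yes)
```

The macro receives the unevaluated expression tree as plain lists and returns a rewritten tree - exactly the kind of transformation a compiler front end performs.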

Thursday, September 8, 2016

What We Can Learn About Generalization From Clojure

Clojure is a functional programming language in the Lisp family (LISP originally stood for LISt Processing), built on top of the Java runtime. There's a version built on JavaScript as well (ClojureScript). A great resource for learning Clojure can be found here.


The link to the Clojure resource goes to the third part of the lessons because I want to highlight how that particular walkthrough moves from something specific to something more general when it comes to the recursive function. It basically builds a map function from the ground up - something like the sketch below.
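
A condensed version of that progression (my own sketch, not the tutorial's exact code): start with a function that can only increment every element, then pull the per-element operation out into a parameter:

```clojure
;; Specific: recursively increment every element of a collection.
(defn inc-all [coll]
  (if (empty? coll)
    '()
    (cons (inc (first coll)) (inc-all (rest coll)))))

;; General: the specific operation (inc) becomes an argument.
(defn my-map [f coll]
  (if (empty? coll)
    '()
    (cons (f (first coll)) (my-map f (rest coll)))))

(inc-all [1 2 3])      ;; => (2 3 4)
(my-map inc [1 2 3])   ;; => (2 3 4)
(my-map str [1 2 3])   ;; => ("1" "2" "3")
```

The only change is that the hardcoded `inc` became the parameter `f` - that single move is the whole generalization.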


Hold that thought and think about how software is built and how it can generally be improved over time. Software is "soft"; it is malleable. It is meant to be changed, so that we humans can take something as general as hardware and plug in instructions, in a language we understand, for the generic computer to perform specific operations.


Generalizing functions/methods is the holy grail of software. Big idea, I know. But here's why I say that: when we generalize functions, they become reusable. When they are reusable, we don't have to keep rewriting them. If we never strove for reuse, everything could be hardware - hardcoded ICs, each specific to one task. And we wouldn't have very good technology at all. That was the state of things at one time. But since we have software, we have reuse.


But we can't generalize the whole world. We wouldn't choose to use different languages if we could generalize indefinitely. Or the generalization would grow so unwieldy that it would become a language of its own. But isn't that what a programming language is? An ultimate generalization? The point is, there's a limit to generalization. Most of the time it's situational and depends on the use cases.

Tuesday, September 6, 2016

Coordinator Pattern for Distributed Enterprise Applications

When building software for the enterprise, particularly where data access to one or more sources will be shared across multiple application domains, and where distribution and reuse through distribution is a virtue, it is helpful to use a Coordinator as the authority for the unit of work. A Coordinator is a service which accepts a single incoming command and issues one or many commands to resources (data access services). The resources are separated into read and write resources for each source. The read resources may be used by the presentation layer or by the Coordination Layer, but the write resources (which specialize in executing commands) may only be used by the Coordination Layer.


A Coordinator is similar to the Unit of Work pattern. The key difference is that it must be hosted as a service and must consume data access as services.


The Coordinator owns the transaction. It must begin the transaction and either roll it back or commit it. The transaction may be distributed across many data sources (throughout the writer resources). The writer resources must be designed to handle concurrency; for example, if using an ORM, you may need to load and then update or insert the specific data entities involved in the transaction. Each writer resource should handle its entire share of the transaction for the data source it wraps (typically one database). The Coordinator sends commands to the resources and raises any events necessary upon executing those commands.


The Coordinator should listen for events (as in an event-based architecture) and respond to them by issuing the appropriate commands to the resources it consumes. Those resources may be shared among other Coordinators. Resource commands should be synchronous so that transactions can be used appropriately. The Coordinator is an architectural pattern which can be used to implement the Unit of Work pattern in a distributed, service-based architecture. Coordinators should be consumed asynchronously using event-driven mechanisms, responding to events raised by other Coordinators or by users via the presentation layer. A sketch follows.
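
To ground the shape of it, here's a minimal Coordinator sketch in Clojure. The resources are in-memory stand-ins with hypothetical names (write-order!, publish-event!, and so on), not a real data access API; what matters is the control flow - the Coordinator alone touches the writers, owns the commit/rollback decision, and raises an event on success:

```clojure
;; In-memory stand-ins for the event bus and two writer resources.
(def event-log (atom []))

(defn publish-event! [event]
  (swap! event-log conj event))

(defn write-order! [tx order]        ; writer resource for source A
  (swap! tx assoc :order order))

(defn write-payment! [tx payment]    ; writer resource for source B
  (swap! tx assoc :payment payment))

(defn handle-place-order
  "Coordinator for one unit of work: accept a single command, issue
  writes to each writer resource inside one transaction, commit or
  roll back, and raise an event on success."
  [{:keys [order payment]}]
  (let [tx (atom {})]                ; the Coordinator owns the transaction
    (try
      (write-order! tx order)
      (write-payment! tx payment)
      ;; a real commit would make the staged writes durable (stubbed here)
      (publish-event! {:type :order-placed :order-id (:id order)})
      :committed
      (catch Exception e
        (reset! tx {})               ; rollback: discard staged writes
        :rolled-back))))

(handle-place-order {:order {:id 42} :payment {:amount 9.99}})
;; => :committed, and @event-log now holds the :order-placed event
```

In a real system the atom would give way to a distributed transaction across the writer resources' data sources, and publish-event! to a message bus, but the ownership boundaries stay the same.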


Through the use of Coordinators in conjunction with the other patterns described in this article, a distributed system can be built to be loosely coupled, highly cohesive, and easily maintainable. The system is loosely coupled in that communication between Coordinators occurs via events: zero, one, or many Coordinators can respond to an event, and any Coordinator can be changed and (if hosted independently) redeployed without impacting any other Coordinator. The level of cohesiveness depends on the scope built into each Coordinator. Each Coordinator could handle one Unit of Work, in which case each one would expose a single service method, perhaps sharing a contract and differing only by location. The method definition would then need to describe an event, and the Coordinator would need to load the event details it needs via resources - which implies an Event Store. Otherwise, the payload would be a serialized packet of all of the data relevant to the event. This is a deeper topic and will require more in-depth exploration before weighing in completely on the interface mechanisms involved. Maintainability is affected by the number of moving pieces, but also by having a standard mechanism of control over the various state changes involved in a complex system.