Monday, September 19, 2016

Is Light Speed Travel Possible?

Off topic for a software blog, but just a quick thought - if matter can be converted into energy and reconstituted back into the exact same form of matter again (all particles arranged as they were before), then yes...that would be one way of moving mass at light speed. Ah...provided that the resulting form of energy could itself be moved at light speed. Might take a while to convert matter to energy and back again though.


It's kind of like a chicken-and-egg problem, isn't it? What converts the energy back into matter? Wouldn't the transducer need to be in position at the destination first (e.g. light-years away)? Or perhaps there is some way to remotely manipulate particles from afar such that they can be rearranged into a predefined arrangement, and only the energy would be sent across. Then again, that would be more or less cloning the matter rather than actually transporting it via its energy form.


Bah, can't help myself, let's do a software thing...a 64-bit number (a long) tops out around 1.8 x 10^19 - nowhere near the roughly 10^80 atoms in the observable Universe - but we can store those numbers all day long. How many numbers, bits, etc...would it take to store all the atoms in a frog? Assuming we had the necessary transducers to take in a signal and reconstruct the matter, what would it take to beam up a frog, Scotty? Assume also that we can digitally encode matter and send it to the other side. Hint: Apply the problem solving technique from the last post to solve this problem.
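
As a rough back-of-envelope sketch (in TypeScript, since numbers are numbers), here's what the storage bill might look like. The frog's atom count and the bits-per-atom figure are loose assumptions on my part, not measurements:

// Back-of-envelope: bits needed to describe every atom in a frog.
// Assumptions (hypothetical round numbers):
//   - a ~30 g frog has on the order of 3e24 atoms (~1e26 atoms per kg of tissue)
//   - ~100 bits per atom for element/isotope plus position and bonding state
const atomsInFrog = 3e24;   // assumed atom count
const bitsPerAtom = 100;    // assumed encoding cost per atom

const totalBits = atomsInFrog * bitsPerAtom;  // ~3e26 bits
const exabytes = totalBits / 8 / 1e18;        // ~3.8e7 EB

console.log(`~${totalBits.toExponential(1)} bits, ~${exabytes.toExponential(1)} exabytes`);

So even before worrying about the transducers, Scotty's buffer is on the order of tens of millions of exabytes for one frog.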


Someone should make a GitHub project out of this...oh wait, I'm on it...here it is.

How to solve any problem in 5 steps...


1. Understand the problem...for real. This means establishing criteria for success as well.
2. Use that understanding of the problem to search for solutions acceptable to those impacted.
3. Weigh the solutions to find optimal candidates. This step may involve testing feasibility.
4. Implement the optimal solution.
5. Measure for success.
If successful, end; else, repeat from step 1.


Each step depends on the ones before it; therefore, the most critical step is step 1. Step 4 may be somewhat arbitrary depending on the outcome of step 3. That is to say, there may be no single optimal solution per se, but a number of similar near-optimal candidates - in that case pick one, or start more than one and let the best emerge (if possible). Step 3 may also reveal that there is no feasible solution at all. In that case, end.
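
Just for fun, here's the process spelled out as a loop in TypeScript - every function below is a hypothetical placeholder for real-world work, not an actual API:

interface Problem { description: string; }
interface Solution { description: string; }

// Hypothetical stand-ins for the five steps.
declare function understand(p: Problem): { successCriteria: (s: Solution) => boolean };
declare function searchSolutions(p: Problem): Solution[];
declare function weigh(candidates: Solution[]): Solution | undefined;
declare function implement(s: Solution): void;

function solve(problem: Problem): void {
  while (true) {
    const { successCriteria } = understand(problem); // step 1: criteria included
    const candidates = searchSolutions(problem);     // step 2
    const optimal = weigh(candidates);               // step 3
    if (!optimal) return;                            // no feasible solution: end
    implement(optimal);                              // step 4
    if (successCriteria(optimal)) return;            // step 5: measure; success ends it
  }
}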

Why Learn Lisps?

In learning Clojure, I came across this fantastic series of tutorials: https://aphyr.com/posts/305-clojure-from-the-ground-up-macros


This specific topic on macros explains a bit about expression trees and how macros in Clojure (and some other Lisps) can take advantage of them to rewrite code. This evoked a memory of an MIT OCW class video I watched years ago, which was taught using the Scheme flavor of LISP (MIT has since switched to Python). It made me think about how different it must be to start learning to program by directly writing expression trees. Isn't that essentially what compilers build when they compile languages like Java, C, or C#? So learning a Lisp like Clojure gives you a sense of what is actually happening when those languages are compiled, and a base from which to write a compiler (if you haven't built one already). With so many CS programs centered on pragmatism these days (e.g. programming), it may be worthwhile for many CS grads and self-learners to pick up some Clojure with this perspective in mind.
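
To make the "code is a tree" idea concrete without leaving TypeScript, here's a tiny sketch: the Lisp expression (+ 1 (* 2 3)) written directly as a tree, plus an evaluator that walks it - roughly what a compiler does with Java or C# source after parsing:

// An expression is a number or an operator node with two child expressions.
type Op = "+" | "*";
type Expr = number | [Op, Expr, Expr];

// (+ 1 (* 2 3)) - the code written directly as its own tree.
const program: Expr = ["+", 1, ["*", 2, 3]];

function evaluate(e: Expr): number {
  if (typeof e === "number") return e;
  const [op, left, right] = e;
  return op === "+" ? evaluate(left) + evaluate(right)
                    : evaluate(left) * evaluate(right);
}

console.log(evaluate(program)); // 7

A Lisp macro is then just an ordinary function that receives a tree like program and returns a rewritten tree before evaluation.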

Thursday, September 8, 2016

What We Can Learn About Generalization From Clojure

Clojure is a functional programming language in the LISP (LISt Processing) family, built on top of the Java runtime. There's a version built on JavaScript as well (ClojureScript). A great resource for learning Clojure can be found here.


The link to the Clojure resource goes to the third part of the lessons because I want to highlight how that particular walkthrough goes from something specific to something more general when it comes to the recursive function. It basically builds a map function from the ground up.
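
In the same spirit as that walkthrough (sketched here in TypeScript rather than Clojure), you start with a recursive function that does one specific thing, then factor out the specific part:

// Specific: recursively increment every number in a list.
function incAll(xs: number[]): number[] {
  if (xs.length === 0) return [];
  const [head, ...tail] = xs;
  return [head + 1, ...incAll(tail)];
}

// General: factor out "what to do with each element" as a parameter,
// and the same recursion becomes map.
function myMap<A, B>(f: (a: A) => B, xs: A[]): B[] {
  if (xs.length === 0) return [];
  const [head, ...tail] = xs;
  return [f(head), ...myMap(f, tail)];
}

console.log(incAll([1, 2, 3]));            // [2, 3, 4]
console.log(myMap(x => x + 1, [1, 2, 3])); // [2, 3, 4]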


Hold that thought in mind and think about how software is built and how it can generally be improved over time. Software is "soft"; it is malleable. It is meant to be changed so that we humans can take something utterly general - hardware - and plug in instructions, in a language that we understand, for the generic computer to perform specific operations.


Generalizing functions/methods is the holy grail of software. Big idea, I know. But here's why I say that - when we generalize functions, they become reusable. When they are reusable, we don't have to keep rewriting them. If we never strove for reuse, everything could be hardware - hardcoded ICs, each specific to one task. And we wouldn't have very good technology at all. That was the state of things at one time. But since we have software, we have reuse.


But we can't generalize indefinitely. We wouldn't choose to use different languages if we could. Or the generalization would become so unwieldy that it would turn into a language of its own. But isn't that what a programming language is - an ultimate generalization? The point is, there's a limit to generalization. Most of the time it's situational and depends on the use cases.

Tuesday, September 6, 2016

Coordinator Pattern for Distributed Enterprise Applications

When building software for the enterprise - particularly where data access to one or more sources will be shared across multiple application domains, and where distribution and reuse through distribution is a virtue - it is helpful to use a Coordinator as the authority for the unit of work. A Coordinator is a service which accepts a single incoming command and issues one or many commands to resources (data access services). The resources are separated into read and write resources for each source. The read resources may be used by the presentation layer or the Coordination Layer, but the write resources (which specialize in executing commands) may only be used by the Coordination Layer.
A Coordinator is similar to the Unit of Work pattern. The key difference is that it must be hosted as a service and consume data access as services.
The Coordinator owns the transaction. It must begin and either roll back or commit the transaction. The transaction may be distributed across many data sources (throughout the writer resources). The writer resources must be designed in a way that handles concurrency. For example, if using an ORM you may need to load, then update or insert, the specific data entities involved in the transaction. Each writer resource should handle the entire transaction for the data source it wraps (typically one database). The Coordinator should send commands to the appropriate resources and raise any events necessary upon executing those commands.


The Coordinator should listen for events (as in event-based architecture) and respond to those events by issuing the appropriate commands to the resources it consumes. Those resources may be shared amongst other Coordinators. Resource commands should be synchronous so that transactions can be used appropriately. The Coordinator is an architectural pattern which can be used to implement a Unit of Work in a distributed, service-based architecture. Coordinators should be consumed asynchronously using event-driven mechanisms, responding to events raised by other Coordinators or by users via the presentation layer.
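
Here's a rough structural sketch of the idea in TypeScript. Every interface and method name is illustrative - an assumption for the sake of the sketch, not a prescribed API:

// Illustrative shapes only.
interface DomainEvent { name: string; payload: unknown; }

interface WriterResource {
  // Commands are awaited (effectively synchronous) so the
  // transaction boundary can be coordinated.
  execute(command: string, payload: unknown): Promise<void>;
}

interface DistributedTransaction {
  commit(): Promise<void>;
  rollback(): Promise<void>;
}

// The Coordinator owns the unit of work: one incoming event in,
// one or many resource commands out, all inside a single transaction.
class Coordinator {
  constructor(
    private writers: WriterResource[],
    private begin: () => Promise<DistributedTransaction>,
    private publish: (e: DomainEvent) => void,
  ) {}

  async handle(event: DomainEvent): Promise<void> {
    const tx = await this.begin();
    try {
      for (const writer of this.writers) {
        await writer.execute(event.name, event.payload);
      }
      await tx.commit();
      this.publish({ name: `${event.name}.completed`, payload: event.payload });
    } catch (err) {
      await tx.rollback(); // any failure rolls back the whole unit of work
      throw err;
    }
  }
}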


Through the use of Coordinators in conjunction with the other patterns expressed in this article, a distributed system can be built such that it is loosely coupled, highly cohesive, and easily maintainable. The system will be loosely coupled in that communication between Coordinators occurs via events. Zero, one, or many Coordinators can respond to an event, and any Coordinator can be changed and (if hosted independently) redeployed without impacting any other Coordinator.


The level of cohesiveness depends on the scope built into each Coordinator. Each Coordinator could handle one Unit of Work, in which case each one would expose a single service method. That method might share a contract across Coordinators and differ only by location. In that case the method definition would need to describe an event, and the Coordinator would need to load the event details it needs via resources - which implies an Event Store. Otherwise, the payload would be a serialized packet of all the data relevant to the event. This is a deeper topic and will require more in-depth exploration in order to weigh in completely on the interface mechanisms involved. Maintainability is affected by the number of moving pieces, but also by having a standard mechanism of control over the various state changes involved in a complex system.

Tuesday, August 23, 2016

Thinking About Dynamic Forms Using Angular pt 2

In the last post I started thinking about some of the problems we encounter with using forms for inputting data into a system. There may be other ways to input data and a well-designed system will be capable of accepting input from other sources. However, the focus of this series is on using forms - specifically via a web browser.


Since a well-designed system will accept input in a platform-agnostic way, we are free to design the user interface in a number of ways. For purposes of this thought experiment I'm going to assume we have the capabilities in place to accept input into the system in some standard way - perhaps through a service or some RESTful API, for example. It doesn't matter, so long as we can stay focused on collecting data from the user and sending that data into the system.


One problem we need to handle is incremental changes. Users have come to expect automatic saving - it's no longer a novelty - and we can and should provide it in order to meet those expectations.


Another problem is logging changes to data. Additionally, we may face changes to the form itself. Let us also consider a form in which a user can add fields as needed.


From what I've gathered so far, we can apply two commonly used technologies to this set of problems and come up with a solution that meets these needs. Here's what I have in mind:


Using Angular (or your favorite SPA technology) and a data object with this interface: { template, data, metadata }, we can store each form instance in a document database and load it in one go.


The template property holds several bits of info: the template text, version, source, form name, id, ..., anything else?


The data property holds just that - it's completely dynamic and goes with the template.


Metadata holds things like userId, action, date, IP, client device info - whatever else is relevant.
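
Pulling those three pieces together, the document might look something like this TypeScript sketch (any field not named above is a guess on my part):

// One form instance as a single document - load it in one go.
interface FormTemplate {
  id: string;
  name: string;       // form name
  version: number;
  source: string;     // where the template came from
  text: string;       // the template markup itself
}

interface FormMetadata {
  userId: string;
  action: "save" | "submit" | "delete";
  date: string;       // ISO timestamp
  ip: string;
  clientDevice: string; // ...whatever else is relevant
}

interface FormDocument {
  template: FormTemplate;
  data: Record<string, unknown>; // completely dynamic; goes with the template
  metadata: FormMetadata;
}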


Any action the client takes - save, submit, delete - is saved as a new version of the document. Some document databases have this versioning built in. I'm thinking of one in particular which can even save files (JS, HTML templates, etc.) along with the data.


In Angular 1 we can use directives to enable the forms. Dynamic directive creation should be achievable since it's all JS. All we really have to do is create components or directives server-side and serve up the JS and template. The template is assigned to either the template or templateUrl property of the directive/component. I've become accustomed to components, so I'll prefer those, using '=' (two-way binding) so they can update linked objects in the parent scope.


The data in the template would be interpolated using the standard controllerAs: 'vm' convention so that all data can be accessed in the templates via vm.property.
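
A minimal sketch of that wiring as an Angular 1.5-style component (the module, component, and binding names here are illustrative assumptions):

// Hypothetical component; in practice the template text would come
// from the template property of the stored form document.
angular.module("dynamicForms").component("dynamicField", {
  template: `
    <label>{{ vm.field.label }}</label>
    <input ng-model="vm.data[vm.field.name]" />
  `,
  controllerAs: "vm",
  bindings: {
    field: "<",  // the field definition, one-way in
    data: "=",   // two-way binding back to the parent scope
  },
});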


That leaves behaviors...not sure about those yet. I'm going to think a bit about that one...until next time, it's a good place to start.

Thinking About How to Build Dynamic Forms with Angular

There are many times when we need forms to collect user data. Make no mistake, a form is not where the app begins or ends - it is only a small part of an app. There are several platforms, frameworks, etc. that we can use to create forms. The "classical" approach is to present a form which handles CRUD operations on a database, updating the tables directly. In some cases this may be adequate, but over time it falls short of an optimal solution in many applications. There are several reasons for the shortcomings, and some inelegant solutions have been used to make up for them.


Often the form requirements change over time. In the classical style, a developer makes some changes to the form and the database schema - sometimes altering existing data in the process by dropping or changing columns. The new form is deployed, and going forward we have our new data. But what about the historical data? Perhaps an archive is used; perhaps an additional database and app version. Three versions later, a monolithic app supporting multiple versions can get messy fast. This is inelegant solution #1.


Sometimes it is beneficial to have dynamic forms, and several CMS solutions exist for such things. Not being overly familiar with the current CMS landscape, I'm certain my proposed Angular solution overlaps with what the CMS domain already offers. I will continue regardless, since I believe we can one-up a CMS solution by enabling the user to add fields at their own discretion, rather than admin-only. Potential inelegant solution #2: locking part of the solution into a specific platform.


Pre-emptive rebuttal to adopting the Angular platform and locking into it: I propose a more robust solution tied to no specific template engine; Angular is simply a fitting engine at the moment, since the dynamic nature of JavaScript lends itself to generating arbitrary forms. Plus, the benefits of a document database backing will be clear once more details are presented. But enough about solutioning for now...let's continue defining the problem next post...I'm outta time ATM.