
Posts tagged with 'architecture'

Patrick Smacchia is building NDepend to make refactoring and technical debt decisions easier.

Show Notes:

NDepend is on Twitter.

Want to be on the next episode? You can! All you need is the willingness to talk about something technical.

Theme music is "Crosscutting Concerns" by The Dirty Truckers, check out their music on Amazon or iTunes.

RabbitMQ is a "message broker". Your program gives it messages. Other programs can come along and retrieve those messages and then do something with them. It sits in the middle, so it's often referred to as "middleware".
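Here's roughly what "your program gives it messages" looks like in practice. This is a minimal sketch using the Python pika client against a broker on localhost; the client, queue name, and payload are illustrative assumptions, not something the broker requires:

```python
# Minimal sketch: hand a message to RabbitMQ. Assumes a broker running on
# localhost and the pika client (pip install pika).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare the queue (it's created if it doesn't already exist).
channel.queue_declare(queue="hello")

# Give the broker a message; some other program can retrieve it later.
channel.basic_publish(exchange="", routing_key="hello", body="Hello, broker!")
connection.close()
```

The retrieving side shows up in design #3 below.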

Why would you want a middleman? Hasn't a lifetime of mattress commercials shown us that cutting out the middleman is always better?

Not always. Let's build an online store that takes orders and has to tell a warehouse to process the order. Consider these three designs:

  1. Your website does the work itself.
  2. Your website passes a message to another program directly.
  3. Your website passes a message to a broker. Another program picks up messages from the broker.

These are the options I considered, and below are the conclusions I've come to about each. If I'm missing anything, please add to the discussion in the comments.

#1 Your website does the work itself

A customer places an order. Since your website is doing all the work itself, it has to: email the customer, charge a credit card, and tell the warehouse to ship the item. What if the warehouse doesn't have the item? Now you have to check another warehouse. While your website is doing all this, it has fewer resources available to handle other customers' browsing and ordering. So it could become a performance problem as well as a complexity problem. Instead...

#2 Your website passes a message to another program directly.

Let's just tell a warehouse program to do the work, and the website will go about its business. The warehouse program can figure out which warehouse and send along the appropriate information. If it has a web service, we can just push the information to it. But what if something goes wrong? What if the service is down, or busy, or overloaded? Now we need to build a retry mechanism into our website, and we're still managing complexity and putting strain on the website.
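Concretely, that retry burden might look something like this sketch (the warehouse endpoint, payload shape, and backoff policy are all hypothetical):

```python
# Sketch of design #2: the website calls a (hypothetical) warehouse web
# service directly, so all the retry logic lives in the website itself.
import time

import requests  # assumes the warehouse exposes an HTTP endpoint

WAREHOUSE_URL = "http://warehouse.example.com/orders"  # hypothetical

def send_order_to_warehouse(order: dict, attempts: int = 3) -> bool:
    for attempt in range(1, attempts + 1):
        try:
            response = requests.post(WAREHOUSE_URL, json=order, timeout=5)
            response.raise_for_status()
            return True
        except requests.RequestException:
            time.sleep(2 ** attempt)  # down, busy, or overloaded: back off
    # Out of retries -- now the *website* has to decide what to do next.
    return False
```

But what if...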

#3 Your website passes a message to a broker. Another program picks up messages from the broker.

Our website just records all the information the warehouse needs into a message. That message goes on the broker and waits. Once the website hands off the message, it's done, and can go about serving other customers. The warehouse program can ask the broker for messages. It gets a message and does the processing. If it goes well, the broker can forget the message. If something goes wrong, it's up to the warehouse program to figure out what to do. It could keep retrying, send an email, call a web service, or whatever else you need. If the warehouse program gets overloaded, you can spin up another warehouse program that talks to the same broker. If the warehouse programs crash, the messages will wait on the broker.
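Here's a sketch of the warehouse side, again using pika; the queue name, payload shape, and ship_item logic are assumptions for illustration:

```python
# Sketch of design #3's consumer: pull order messages off the broker,
# ack on success (so the broker can forget them), requeue on failure.
import json

import pika

def ship_item(order):
    # Hypothetical warehouse logic: pick, pack, and ship.
    print(f"Shipping order {order['order_id']}")

def handle_order(ch, method, properties, body):
    order = json.loads(body)
    try:
        ship_item(order)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # broker forgets it
    except Exception:
        # Something went wrong: put the message back so this worker
        # (or another one talking to the same broker) can retry it.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # survives broker restarts
channel.basic_consume(queue="orders", on_message_callback=handle_order)
channel.start_consuming()
```

Scaling out the warehouse side is just a matter of starting a second copy of this program against the same queue.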

The middleman's job might seem very simple, but that simplicity is exactly what makes it possible to break complex operations down and process them smoothly.

Welcome to the latest installment of the Brief Bio series, where I'm writing up very informal biographies about major figures in the history of computers. Please take a look at the Brief Bio archive, and feel free to leave corrections and omissions in the comments.

John von Neumann

After Ada Lovelace's death, there was a pretty big lull in notable computer-related activity. World War II was the main catalyst for significant research and discovery, which is why I'm skipping ahead to figures involved in that period. If you think there's someone worth mentioning that I've skipped over, please say so in the comments.

So now I'm skipping ahead about half a century, to 1903, when John von Neumann was born in Budapest, in the Austro-Hungarian Empire, to wealthy parents. His father was a banker, and von Neumann's precociousness, especially in mathematics, can be at least partially attributed to his father's profession. Von Neumann tore through the educational system and received a Ph.D. in mathematics when he was 22 years old.

Like Pascal, Leibniz, and Babbage, von Neumann contributed to a wide variety of knowledge areas outside computing. Some of the high points of his work include the axiomatization of set theory, the mathematical foundations of quantum mechanics, and the creation of game theory.

He also worked on the Manhattan Project, and thus helped to end World War II in the Pacific. He was present for the very first atomic bomb test. After the war, he went on to work on hydrogen bombs and ICBMs. He applied game theory to nuclear war, and is credited with the strategy of Mutually Assured Destruction.

His work on the hydrogen bomb is where von Neumann enters the picture as a founding father of computing. He introduced the stored-program concept, which would come to be known as the von Neumann architecture.

Von Neumann became a Commissioner of the United States Atomic Energy Commission. Here's a video of von Neumann, while in that role, advocating for more training and education in computing.

In 1955, von Neumann was diagnosed with some form of cancer, possibly related to his exposure to radiation at the first nuclear test. He died in 1957, and is buried in Princeton Cemetery (New Jersey).

I encourage you to read more about him in John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More (which looks to be entirely accessible on Google Books).

AOP is best used for cross-cutting concerns, which are often non-functional requirements. But what's the difference between functional requirements and non-functional requirements?

A functional requirement is a requirement about what an application should do. It's manifested in a form the user can easily observe: business rules, UI logic, etc.

A non-functional requirement is a requirement about how an application should work. It's the stuff a user typically doesn't see (and probably doesn't care about): logging, caching, threading, etc.
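Logging is the canonical example, so here's a sketch of what factoring it out of business code can look like. A Python decorator plays the role an aspect would; the names and log format are purely illustrative:

```python
# Sketch: logging as a non-functional concern, kept separate from the
# functional code it wraps. A decorator stands in for an aspect here.
import functools
import logging

logging.basicConfig(filename="calls.log", level=logging.INFO)

def log_call(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # How the app works: every call records its name and arguments...
        logging.info("%s called with args=%r kwargs=%r",
                     func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

@log_call
def submit_order(customer_id, items):
    # ...while what the app does stays pure business logic.
    return {"customer": customer_id, "items": items, "status": "pending"}
```

The business method never mentions logging, and the logging never mentions orders: that separation is exactly what AOP is after.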

So here's a quick exercise for you:

Which of these are functional and which are non-functional? Why?

  1. Addresses should be US-only, and ZIP codes must be five digits
  2. Every method call should log its own name and arguments to a text file.
  3. When returning a list of Customers, only Customers the current user is authorized to see should be returned.
  4. When submitting a new Customer, it must be put in a pending queue for approval by an administrator.

I'll post my answers in a later post, but feel free to leave your answers in a comment.

I came across an abstract and slides (PDF) about using AOP to detect code smells. It got me thinking: are clean code and SOLID architecture themselves a cross-cutting concern? "Good, maintainable code" usually isn't ever written down explicitly as a requirement (functional or otherwise); it's just sorta assumed that developers will write the best code they can. Obviously that doesn't always happen, and the customer probably won't know one way or the other until after release.

As a developer, sometimes it's hard for me to be objective when looking at my code and the choices I've made. Pair programming is one way to help alleviate this: I can get instant feedback from another developer as I'm coding and making decisions. Test-driven development also helps, by forcing me to write code that's easy to test (and therefore loosely coupled). But not every project or code base has the luxury of either of those things: maybe there's only one developer, or maybe it's a legacy code base. Whatever the reason, another approach is code analysis: code metrics like cyclomatic complexity and maintainability index. There are also heuristics, aka "code smells", that (not always, but usually) indicate there might be a problem.
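As a toy illustration of what a metric like cyclomatic complexity measures, here's a sketch of a counter for Python source. It's a simplification (real analyzers handle many more cases), and the list of decision nodes is my own assumption:

```python
# Toy cyclomatic complexity: 1 plus the number of decision points.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

print(cyclomatic_complexity("""
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""))  # prints 3: one path through, plus two branches
```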

There are three code smells addressed in Juliana Padilha's slides, most of which I hadn't heard before:

  • Divergent change: this sounds like the inverse of the Single Responsibility Principle, i.e. the class has more than one reason to change, and thus its responsibility is diverging.
  • Shotgun surgery: I've not heard this term, but I've certainly seen it (and been guilty of it myself). Making a change requires touching a handful of different classes instead of just one or two.
  • God class: I actually have heard of this, and if you consider classes with 300+ line Page_Load methods in ASP.NET to be God classes, then I've certainly seen it and done it.

The metrics she uses to find these smells are not traditional metrics, but "concern-driven" metrics meant to identify code "scattering" and "tangling" (i.e. the code that AOP is meant to help refactor). They include:

  • Concern Diffusion over Class (CDC)
  • Concern Diffusion over Operation (CDO)
  • Number of Concerns per Class (NCC)
  • Concern Diffusion over Lines of Code (CDLOC)

These metrics weren't defined in the slides, but I found them in another white paper from Columbia.


About the Author

Matthew D. Groves lives in Central Ohio. He works remotely, loves to code, and is a Microsoft MVP.
