=

desalasworks presents:

a selection of works by steven de salas

Asynchronous Neural Networks in JavaScript


For the past few months I’ve been trying to get some of my robots to roam around the house with a mind of their own, learning as they go. It turns out that the task is harder than I thought.

The main issue is parallel signal processing.

To illustrate: a single robot will usually have 3-5 sensors that collect information about the world around it. I’m particularly interested in ambient light as I want it to be able to recharge its batteries with a mini solar panel if it needs to, but I also have some useful options for collision detection, orientation and WiFi triangulation.

First attempt, enter the matrix

I originally attempted a weighted relational matrix of inputs to outputs: like a chequers board where each square starts off with one piece, representing the strength of the relationship between an input and an output. To simulate learning, I would stack extra pieces on a square, for example to reinforce past choices, making that square more likely to get picked in subsequent iterations. I called the engine fusspot, which I thought was a suitable name given that this algorithm could be trained, but in the end choices were always fuzzy and random.
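
To give you an idea, the matrix looked something like this (a minimal sketch of the concept, not the actual fusspot code; names are made up for illustration):

// Rows are inputs, columns are outputs; each square holds a stack of 'pieces'.
function WeightedMatrix(inputs, outputs) {
  var weights = this.weights = {};
  inputs.forEach(function(input) {
    weights[input] = {};
    outputs.forEach(function(output) {
      weights[input][output] = 1; // every square starts off with one piece
    });
  });
}

// Pick an output for an input: biased by weight, but always fuzzy and random.
WeightedMatrix.prototype.choose = function(input) {
  var row = this.weights[input], total = 0, output;
  for (output in row) total += row[output];
  var pick = Math.random() * total;
  for (output in row) {
    pick -= row[output];
    if (pick <= 0) return output;
  }
};

// Training stacks an extra piece on a square, reinforcing that choice.
WeightedMatrix.prototype.reinforce = function(input, output) {
  this.weights[input][output] += 1;
};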

[Image: chequers board]

This worked, in a fashion. I could train the engine by modifying the strength of each relationship, making subsequent choices more likely to be biased by training, but it was a terrible solution when trying to implement it in practice.

The main problem is that this solution does not take context into account, and it turns out context is everything: you can’t rely on a single input channel (such as an IR sensor) to determine the choices a robot can make (stop, turn, keep going); it takes a holistic approach that integrates data from all the sensors at the same time.

Context is everything. You can’t rely on a single input channel, it takes a holistic approach that integrates data from all the sensors at the same time.

I got to a point where trying to break down the problem further wasn’t getting me anywhere. So I decided to look around and see what other people had done.

Video games and artificial gardening.

By this point I had done a bit of reading on Artificial Neural Networks and decided that this was roughly where I was heading.

This led me into the world of video games. I’ve played plenty of games myself and know that game developers are at the leading edge of AI.

The Wikipedia article inspired me to look a bit more into a game called Creatures, written by a guy called Steve Grand. Steve dropped off the face of the earth a few years ago after a self-inflicted Kickstarter project gone wrong.

But before he did so, he wrote an excellent book called Creation: Life and How to Make It, where he details some of the techniques he used, as well as a zen-like philosophical outlook on artificial life powerful enough to blow my socks off and turn upside down all my notions of what AI is and should be about:

All life, both biological and artificial, is a series of emergent patterns that persist longer than others. In other words, certain patterns of interaction between components (cells usually, though these are also made of patterns of interaction between cellular components, and so on down to the atomic level) happen to be more successful in their environment and thus persist through time.

The beauty here is that life is a set of emergent patterns, rather than the components of which it’s made, or the behaviour that arises from their interaction. The best humans can strive for is creating the conditions that would allow such patterns to emerge, for example by defining the underlying components and encouraging certain patterns of behaviour over others. Think of it as artificial gardening.

The beauty here is that life is a set of emergent patterns, rather than the components of which it’s made, or the behaviour that arises from their interaction.

Neural Networks, Rebooted.

So. I started modelling a neural network in JavaScript.

It’s not hard. Just an array of nodes, each containing another array of links to other nodes, with the added catch that each link has a weight between 0 and 1, which determines whether the onward connection should be fired during a chain-reaction event. My first commit was 100 lines long; 60 if you remove comments and whitespace.

Since I was trying to accurately model the biological system in our own brains (rather than a mathematical abstraction), I noticed my own gut telling me to add a slight delay between firing neurons. It seems natural really: it generally takes a whole millisecond to fire off the next neuron inside our brains, so the signal is never going to travel all the way across the brain instantly.
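
Stripped right down, the idea looks something like this (a sketch of the concept rather than the actual commit):

// Each neuron keeps an array of weighted links to other neurons.
function Neuron(id) {
  this.id = id;
  this.links = []; // { target: Neuron, weight: 0..1 }
}

Neuron.prototype.connect = function(target, weight) {
  this.links.push({ target: target, weight: weight });
};

// Firing propagates the chain reaction, but never instantly:
// each hop takes a millisecond or so, just like in biology.
Neuron.prototype.fire = function() {
  this.links.forEach(function(link) {
    if (Math.random() < link.weight) {
      setTimeout(function() { link.target.fire(); }, 1);
    }
  });
};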

This solved my problem rather neatly. I had several input streams from each sensor that needed to be integrated, and I needed something that could be trained on a particular input, while taking into account all the other inputs, or even a changing set of inputs. Something that could behave more like a normal brain and create an intricately timed set of reactions to an input (think of the muscles involved in picking up your phone), or integrate an input with itself (by having a signal that loops around).

I noticed my own gut telling me to add a slight delay between firing neurons. It seems natural really … the signal is never going to travel all the way across the brain instantly.

I also researched the interleaving of signal patterns in real neurons and came up with a spiking model that combined multiple upstream signals before firing a signal downstream.
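
Roughly speaking, a spiking neuron accumulates potential from upstream signals and only fires downstream once a threshold is crossed. A bare-bones sketch of that idea:

function SpikingNeuron(threshold) {
  var self = this;
  self.threshold = threshold || 1;
  self.potential = 0;
  // the potential leaks away over time, so only closely-timed
  // upstream signals combine into a downstream spike
  setInterval(function() { self.potential *= 0.5; }, 10);
}

SpikingNeuron.prototype.receive = function(weight) {
  this.potential += weight;
  if (this.potential >= this.threshold) {
    this.potential = 0; // reset after the spike
    this.fire();        // propagate downstream (wiring omitted here)
  }
};

SpikingNeuron.prototype.fire = function() { /* see Neuron above */ };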

Chaos and the game of life

This was all confusing as hell so I needed to visualize my network in order to understand what it was doing.

Thankfully JavaScript is a great language for that: there are tons of libraries out there for visualization, and thanks to V8 and the NodeJS ecosystem you can run the same code in the browser as in a command line executable. I used browserify to require my modules on a browser page and viz.js to perform visualizations using the HTML5 <canvas/> element.

The result was intriguing. I noticed all sorts of patterns, travelling waves, and oscillations taking place when a neuron was fired. It was utter chaos but it was also mesmerizing.

The result was intriguing. It was utter chaos, but it was also mesmerizing.

I felt I was on to something. After all electrical wave patterns are an inherent part of the human brain. So why do prevailing models of neural networks ignore them?

It turns out that I was building something called a Spiking Neural Network, the so-called 3rd generation of neural networks that more closely emulate their biological cousins, and which encode information in patterns of signals.

I tried different network shapes: a ball, a sausage and a doughnut. All produced interesting variations. Jonathan, a colleague at work, said it reminded him of Conway’s Game of Life. I had to agree with him.

Now in 3D

It took some extra modelling to come up with a good visualization that would let me understand what was going on. My original 2D library could only visualize around 300 neurons before falling over; using 3D I could increase that to a few thousand.

I called the newly born library botbrains, and used a combination of a 3D force graph in WebGL for rendering the network, WebSockets for client/server interaction, and node.js for running the neural network on the back-end, so it could integrate over USB serial with the Arduino microcontroller running my motors and collecting data from sensors.

Machine Learning, for signal patterns.

The next challenge was learning. I had to find a way to make the robot marginally smarter over time. After all, it’s not very useful if your robot keeps bumping into the wall.

Thankfully, plenty of people had already worked out the answer. Or rather two answers, really: you either use reinforcement or back-propagation. Both are good approaches, but since my network was asynchronous, working out the loss function for back-propagation was a real headache (I had to backtrack in time taking into account competing signals), and besides, I didn’t want to depend on layers, as I wanted all the richness of signal patterns that different shapes of neural network could create.

The biological answer here was reinforcement, aka “Long-Term Potentiation”. So I went with that. Later I found out that AlphaZero, the current AI chess champion, used reinforcement too, so that made me feel much better about my choice.

So, how to implement reinforcement learning?

Well, it’s not so hard really, though it took several iterations. The basic premise was to reinforce synapses that fired recently whenever the robot does something good (like avoiding an obstacle), and to weaken synapses that fired recently whenever something bad happens.

Although pretty close, this wasn’t enough on its own, as the network would either overload or fail to connect altogether. After a fair bit of testing I worked out that the most elegant model was to keep the total network weight constant by spreading out any positive or negative changes over the remaining unused synapses. So effectively, during learning I would strengthen ‘successful’ connections while weakening all the other ones, and reverse the process for negative reinforcement.

I also added a tendency for synapse weights to decay towards their starting weight with each learning iteration, as well as strengthening unused synapses randomly whenever negative reinforcement is applied (ie, sometimes the solution to the problem can be found by using neural pathways that have never been used before).
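
Put together, one learning iteration looks roughly like this (a simplified sketch, not the botbrains implementation; the decay towards starting weights is left out for brevity):

// Reinforce synapses that fired recently; spread the opposite change
// over the unused ones so total network weight stays constant.
function learn(synapses, positive) {
  var delta = 0.05, now = Date.now();
  var recent = synapses.filter(function(s) { return now - s.lastFired < 1000; });
  var unused = synapses.filter(function(s) { return now - s.lastFired >= 1000; });
  recent.forEach(function(s) {
    s.weight = Math.min(1, Math.max(0, s.weight + (positive ? delta : -delta)));
  });
  var spread = (recent.length * delta) / (unused.length || 1);
  unused.forEach(function(s) {
    s.weight = Math.min(1, Math.max(0, s.weight + (positive ? -spread : spread)));
  });
}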

Long-term memory was a problem, and still is to this day, though it’s getting better.

I added the ability for synapses to contain both ‘short-term’ and ‘long-term’ weights, each influencing the other. I pinched this idea from Steve Grand, and found it an incredibly simple and powerful solution to long-term memory.
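
In sketch form, the consolidation step between the two weights is tiny:

// Each tick: the long-term weight drifts slowly towards the short-term
// one, while the short-term weight decays back towards the long-term one.
function consolidate(synapse) {
  synapse.longTerm += (synapse.shortTerm - synapse.longTerm) * 0.01;
  synapse.shortTerm += (synapse.longTerm - synapse.shortTerm) * 0.1;
}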

Now all I had to do was to plug my robot into it.


Immutability in JavaScript, A Contrarian View


TL;DR: Immutability is more a fashion trend than a necessity in JavaScript. If you are using React it does provide a neat work-around to some confusing design choices in state management. However, in most other situations it won’t add enough value over the complexity it introduces, serving more to pad out a resume than to fulfill an actual client need.

I wrote this originally as an answer to a Stack Overflow question posted by someone who was as baffled by this JavaScript programming trend as I was.

Here is my answer to this topic. It is contrarian and may also be contentious, but bear in mind it’s not the only answer, it’s just my own point of view.

Why is immutability so important (or needed) in JavaScript?

Well, I’m glad you asked!

Some time ago a very talented guy called Dan Abramov wrote a JavaScript state management library called Redux, which uses pure functions and immutability. He also made some really cool videos that made the idea really easy to understand (and sell).

The timing was perfect. The novelty of Angular was fading, and the JavaScript world was ready to fixate on the latest thing with the right degree of cool, and this library was not only innovative but slotted in perfectly with React, which was being peddled by another Silicon Valley powerhouse.

Sad as it may be, fashions rule in the world of JavaScript. Now Abramov is being hailed as a demigod and all us mere mortals have to subject ourselves to the Dao of Immutability… Whether it makes sense or not.

What is wrong in mutating objects?

Nothing!

In fact, programmers have been mutating objects for, er… as long as there have been objects to mutate. 50+ years of application development, in other words.

And why complicate things? When you have an object cat and it dies, do you really need a second cat to track the change? Most people would just say cat.isDead = true and be done with it.
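
Or, side by side:

// The everyday approach: just mutate the poor cat.
var cat = { name: 'Tom', isDead: false };
cat.isDead = true; // done.

// The immutable approach: a second cat to track the change.
var deadCat = Object.assign({}, cat, { isDead: true });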

Doesn’t (mutating objects) make things simple?

YES! .. Of course it does!

Especially in JavaScript, which in practice is mostly used for rendering a view of some state that is maintained elsewhere (like in a database).

What if I have a new News object that has to be updated? … How do I achieve this in that case? Delete the store and recreate it? Isn’t adding an object to the array a less expensive operation?

Well, you can go the traditional approach and update the News object, so your in-memory representation of that object changes (and the view displayed to the user, or so one would hope)…

Or alternatively…

You can try the sexy FP/immutability approach and add your changes to the News object to an array tracking every historical change, so you can then iterate through the array and figure out what the correct state representation should be (phew!).
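
Which in practice looks something like this (a sketch of the style in question):

// Every change is appended to the array as an immutable event...
var history = [
  { type: 'NEWS_ADDED',   payload: { id: 1, title: 'Cat dies' } },
  { type: 'NEWS_UPDATED', payload: { id: 1, title: 'Cat lives!' } }
];

// ...and the 'correct' state is derived by iterating through the lot.
var news = history.reduce(function(state, event) {
  switch (event.type) {
    case 'NEWS_ADDED':
      return state.concat([event.payload]);
    case 'NEWS_UPDATED':
      return state.map(function(item) {
        return item.id === event.payload.id
          ? Object.assign({}, item, event.payload)
          : item;
      });
    default:
      return state;
  }
}, []);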

I am trying to learn what’s right here. Please do enlighten me 🙂

Fashions come and go buddy. There are many ways to skin a cat.

I am sorry that you have to bear the confusion of a constantly changing set of programming paradigms. But hey, WELCOME TO THE CLUB!!

Now, here are a few important points to remember with regards to immutability; you’ll get these thrown at you with the feverish intensity that only naivety can muster.

1) Immutability is awesome for avoiding race conditions in multi-threaded environments.

Multi-threaded environments (like C++, Java and C#) are guilty of the practice of locking objects when more than one thread wants to change them. This is bad for performance, but better than the alternative of data corruption. And yet not as good as making everything immutable (Lord praise Haskell!).

BUT ALAS! In JavaScript you always operate on a single thread. Even web workers (each runs inside a separate context). So since you can’t have a thread related race condition inside your execution context (all those lovely global variables and closures), the main point in favour of Immutability goes out the window.

(Having said that, there is an advantage to using pure functions in web workers, which is that you’ll have no expectations about fiddling with objects on the main thread.)

2) Immutability can (somehow) avoid race conditions in the state of your app.

And here is the real crux of the matter: most (React) developers will tell you that immutability and FP can somehow work this magic that allows the state of your application to become predictable.

Of course this doesn’t mean that you can avoid race conditions in the database. To pull that one off you’d have to coordinate all users in all browsers, and for that you’d need a back-end push technology like WebSockets (more on this below) that will broadcast changes to everyone running the app.

Nor does it mean that there is some inherent problem in JavaScript where your application state needs immutability in order to become predictable, any developer that has been coding front-end applications before React would tell you this.

This rather confusing claim simply means that if you use React your application is prone to race conditions, but that immutability allows you to take that pain away. Why? Because React is special: it’s been designed first and foremost as a highly optimised rendering library, with state management subverted to that aim. Component state is managed via an asynchronous chain of events (aka “one-way data binding”) that optimises rendering but that you have no control over, and that relies on you remembering not to mutate state directly.

Given this context, it’s easy to see how the need for immutability has little to do with JavaScript and a lot to do with React: if you have a bunch of inter-dependent changes in your spanking new application and no easy way to figure out what your state is currently at, you are going to get confused, and thus it makes perfect sense to use immutability to track every historical change.

3) Race conditions are categorically bad.

Well, they might be if you are using React. But they are rare if you pick up a different framework.

Besides, you normally have far bigger problems to deal with… Problems like dependency hell. Like a bloated code-base. Like your CSS not getting loaded. Like a slow build process, or being stuck with a monolithic back-end that makes iterating almost impossible. Like inexperienced devs not understanding what’s going on and making a mess of things.

You know. Reality. But hey, who cares about that?

4) Immutability makes use of Reference Types to reduce the performance impact of tracking every state change.

Because seriously, if you are going to copy stuff every time your state changes, you better make sure you are smart about it.

5) Immutability allows you to UNDO stuff.

Because er.. this is the number one feature your project manager is going to ask for, right?

6) Immutable state has lots of cool potential in combination with WebSockets

Last but not least, the accumulation of state deltas makes a pretty compelling case in combination with WebSockets, which allow for easy consumption of state as a flow of immutable events.

Once the penny drops on this concept (state being a flow of events, rather than a crude set of records representing the latest view), the immutable world becomes a magical place to inhabit. A land of event-sourced wonder and possibility that transcends time itself. And when done right this can definitely make real-time apps easier to accomplish: you just broadcast the flow of events to everyone interested, so they can build their own representation of the present and write back their own changes into the communal flow.
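
A sketch of what the consuming side can look like (hypothetical endpoint and event shape, for illustration only):

var events = [];
var socket = new WebSocket('wss://example.com/news');

socket.onmessage = function(msg) {
  // the flow is append-only: nothing is ever mutated
  events = events.concat([JSON.parse(msg.data)]);
  render(events.reduce(applyEvent, [])); // rebuild the present from history
};

socket.onopen = function() {
  // writing your own changes back into the communal flow
  socket.send(JSON.stringify({ type: 'NEWS_UPDATED', payload: { id: 1, title: 'Cat lives!' } }));
};

function applyEvent(state, event) { /* same reducer idea as above */ return state; }
function render(state) { /* update the view */ }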

But at some point you wake up and realise that all that wonder and magic do not come for free. Unlike your eager colleagues, your stakeholders (yeah, the people who pay you) care little about philosophy or fashion, and a lot about the money they pay to build a product they can sell. The bottom line is that it’s harder to code for immutability and easier to break it, and there is little point having an immutable front-end if you don’t have a back-end to support it. When (and if!) you finally convince your stakeholders that you should publish and consume events via a push technology like WebSockets, you find out what a pain it is to scale in production.


Now for some advice, should you choose to accept it.

A choice to write JavaScript using FP/immutability is also a choice to make your application code-base larger, more complex and harder to manage. I would strongly argue for limiting this approach to your Redux reducers, unless you know what you are doing… And IF you are going to go ahead and use immutability regardless, then apply immutable state to your whole application stack, not just the client side, as you’re missing the real value of it otherwise.

Now, if you are fortunate enough to be able to make choices in your work, then try and use your wisdom (or not) and do what’s right by the person who is paying you. You can base this on your experience, on your gut, or on what’s going on around you (admittedly, if everyone is using React/Redux then there is a valid argument that it will be easier to find someone to continue your work)… Alternatively, you can try either the Resume Driven Development or Hype Driven Development approaches. They might be more your sort of thing.

In short, the thing to be said for immutability is that it will make you fashionable with your peers, at least until the next craze comes around, by which point you’ll be glad to move on.


And the reply!

Hello Steven, Yes. I had all these doubts when I considered immutable.js and redux. But, your answer is amazing! It adds lot of value and thanks for addressing every single point that I had doubts on. It’s so much more clear/better now even after working for months on immutable objects. – bozzmob

How to Build a Cross-Platform Metal Detector App


[Image: cross-platform coin detector app]

Did you know that your smartphone has an in-built magnetometer to power its compass?

The earth’s magnetic field is big but not particularly strong, which means the magnetometer in your phone is sensitive enough to detect small ferrous metal objects, such as a coin, a large nail or a piece of cutlery, that generate small magnetic fields of their own. In other words, YOU DON’T HAVE TO GO AND BUY A METAL DETECTOR, you’ve ALREADY GOT ONE INSIDE YOUR PHONE! We just have to unlock its potential.

Do you want to try a bit of hacking?

In this post, I’m going to show you how to build your own cross-platform metal detector app for BOTH iPhone and Android phones. The app will interface with the magnetometer hardware in your phone and notify you of any ferrous metal or magnetic objects nearby.

This should take between 30 minutes and 2 hours of your time for either platform, depending on the speed of your internet connection; perhaps longer if you have bad luck and something doesn’t work straight away.

Introducing Cordova (or PhoneGap, for Adobe fans)

The Apache Cordova project is an open source initiative aimed at bundling an HTML page (ie `index.html`, such as the one below) into a native app that will work across just about every smartphone device. This is great news because it means that you only need to code your app once and it works on multiple devices!

# index.html

<html>
<head><title>My First App</title></head>
<body>
<button onclick="alert('World')">Hello</button>
</body>
</html>

PhoneGap is the same thing, but with a snazzier name trademarked by Adobe (the Photoshop people) and an online service that gives you a wizard-like interface for building apps, so you don’t have to deal with obscure command line errors; but then you have to pay Adobe for the favour.

So what does this ‘Cordova’ thingy do for me?

Using Cordova means that you don’t have to know how to set up a mobile app project, because it’s all done for you! Cordova will basically generate all the folders and files you need to get started with a basic HTML ‘Hello World’ app for each platform.

Cordova also has a powerful ‘plugin’ framework that allows access to the phone’s hardware. In this post we will be downloading 2 different plugins to unlock the magnetometer and vibration functionality inside your phone.

To get started with a Cordova app you need to install NodeJS (http://nodejs.org), the JavaScript runtime and command line tooling that Cordova uses to do most of the donkey-work of setting up your project.

The Code

Go have a look at the following ‘index.html‘ file on GitHub; it contains all the code you need for a basic magnetometer app:

https://github.com/sdesalas/cordova-magnetometer-app/blob/master/www/index.html

Save it to your computer:

1. Right-click on ‘Raw‘ button
2. Choose ‘Save link as..’ on the context menu
3. Put it somewhere sensible where you can find it

Hey, this is HTML right?

Correct, that means your browser can open it.

Double click to open it on a browser. You should get something like this:

Err.. Why?

Well, your mobile has a magnetometer in it; your PC doesn’t. So all you’ll get out of it in a desktop browser is a ZERO reading.

And that’s OK, it just means everything is working as expected.
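
If you are curious, the gist of the detection logic inside index.html goes something like this (a simplified sketch; I am quoting the method names from memory, so check the cordova-plugin-magnetometer README for the exact API):

var baseline = null;

function poll() {
  cordova.plugins.magnetometer.getReading(function(reading) {
    // reading holds x, y, z and an overall magnitude in microteslas
    if (baseline === null) baseline = reading.magnitude; // calibrate on first read
    if (Math.abs(reading.magnitude - baseline) > 10) {
      navigator.vibrate(100); // buzz when a ferrous object disturbs the field
    }
    setTimeout(poll, 100);
  });
}

document.addEventListener('deviceready', poll, false);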


The Android App

What You’ll need:

1. A PC or Mac with NodeJS, Java (JDK7) and Android Studio installed.
2. An Android (4.0+) device and USB cable for testing.
3. Basic understanding of a command line interface (you can navigate between folders).

CORPORATE FIREWALLS: Note that the steps below may not work behind a corporate firewall, you may need extra steps to set up internet access via a corporate proxy.

You’ve already installed NodeJS, right? If it’s an old version (older than 0.11) you might want to download the latest and re-install it.

Ok, next you will need Java JDK 7 and Android Studio installed; these are needed to create a native app for Android. You can get them from the following links:

JDK7: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html -> Remember x86=32bit, x64=64bit

Android Studio: https://developer.android.com/sdk/index.html

BEWARE: after installing/running Android Studio, it will want to download the latest Android SDKs (the bits of functionality that enable your PC to talk to the phone hardware). This can take anywhere from 10 minutes up to a couple of hours over a slow internet connection.

Installing Cordova

We are now going to install Cordova and create an Android project.

Open up a command line window and type:

> npm install -g cordova

Creating the Android Project

Next, navigate to your projects folder and create a new cordova project with android support:

> cordova create magnetometer my.magnetometer magnetometer
> cd magnetometer
> cordova platform add android

If all goes well, this should have created a bunch of files and folders (ie a project structure) within your ‘magnetometer’ project folder. You should be able to see them using a file explorer:

Copying our Metal Detector Code

The folder ‘www‘ contains our web code. Here is where we’ll be placing the index.html file that we downloaded before.

Building the App

Go back to the command line and run an extra couple of commands to ‘unlock’ the vibration and magnetometer functionality on your phone.

> cordova plugin add cordova-plugin-vibration
> cordova plugin add cordova-plugin-magnetometer

Next we are going to ‘build’ the app so you can use it on your Android phone. We need one more Cordova command to ‘prepare’ the app:

> cordova prepare

Then follow these instructions:

1. Open Android Studio
2. On the Welcome Prompt. Select ‘Import Project’
3. Click on the ‘platforms\android’ folder inside your project.
4. Press ‘OK’.
5. Press ‘OK’ again if you get an extra prompt about Gradle settings.
6. Wait for the progress bar to finish.

This will create an Android project; next we will build the APK file that can be used to run the magnetometer app on your phone.

Make sure you’ve waited until all the progress bars are finished before the next steps.

1. Android Studio will need to get all required Java libraries that the project depends on, this usually takes some time. Did you really wait until all progress bars and loading messages were finished?
2. Ok, click on the ‘Play’ button at the top menu as per screen below.
3. This will trigger a build using Gradle. You can see the progress on the ‘Gradle Console’ window.
4. When ready, a popup will appear asking if you want to run the emulator. DO NOT press OK here. It will grind your computer to a halt. Just press ‘Cancel’ on the popup.

Installing the App on your Android Phone

Next we are going to email ourselves the completed app.

1. Navigate to your ‘magnetometer’ project folder using the file explorer.
2. Click ‘platforms’ -> ‘android’ -> ‘build’ -> ‘outputs’ -> ‘apk’
3. Find the file ‘android-debug.apk‘

4. Next. Email it to yourself.
5. Go to your Android phone.
6. Open ‘Settings’ > ‘Security’
7. Turn ON ‘Unknown Sources’ (you can turn it back OFF later)
8. Open your email, click on the attached ‘APK’ file.
9. You should then get an installation screen as per below.

10. Go to your Apps and look for ‘magnetometer’, then open it.
11. READY! Start finding some metal objects!

The iOS App

What You’ll need:

– A newish Mac with NodeJS and XCode installed
– An Apple Developer membership (US$99)
– An iPhone (4.0+) and USB cable for testing.
– Basic understanding of a command line interface (you can navigate between folders).

Installing Cordova

1. Make sure you have NodeJS installed and its at a reasonably new version (0.12+).
2. If you need to update the version of node. Do it following this article.
3. Open a terminal window (Look for the ‘Terminal’ app in Launchpad)
4. Type the following command to install Cordova :

$ sudo npm install -g cordova

5. You’ll be prompted to enter a password for your user account (on the mac).
6. There will probably be some warnings when installing Cordova (“WARN”), you can safely ignore these. If you get any “ERROR”s they are a bit more of a problem and will need attention before continuing. Type the errors into Google for help.

Creating the iOS Project

Once Cordova has been installed, we are going to create the project structure for our magnetometer project. Firstly, navigate to your projects folder. The commands to use are :

$ cordova create magnetometer my.magnetometer magnetometer
$ cd magnetometer
$ cordova platform add ios
$ cordova plugin add cordova-plugin-vibration
$ cordova plugin add cordova-plugin-magnetometer

Copying our Metal Detector App Code

This should have created your project structure as per the following screenshot. Make sure you delete the contents of the ‘www’ folder and replace them with the ‘index.html’ file you downloaded earlier.

You will then need to run the following command on your ‘magnetometer’ folder to ‘prepare’ the files (it will copy the ‘index.html’ to the XCode build folder).

$ cordova prepare

Building the iOS App

And now we can continue to XCode by finding the file ‘magnetometer.xcodeproj’ inside our project folder (look inside the sub-folder ‘platforms’ and ‘ios’). Just double-click on it and it will open (if you have XCode installed).

Make sure you are signed in as an Apple Developer. Joining costs US$99. You cannot drop an app on your iOS device without one of these:

1. Go to ‘XCode’ (on the top menu)
2. Select ‘Preferences…’, then ‘Accounts’
3. Click the plus (+) sign to add an account
4. Enter your Apple Developer email and password

Running the App on your iPhone

Then just go and run the app! This bit is even simpler than for Android!

1. Connect your iPhone to your Mac using the USB cable.
2. Select your iPhone as the device ‘target’ on the top menu
3. Press the ‘Play’ button to run the build.

And VOILA! Now start finding some metal with your iPhone!

And that’s IT!

That concludes my post for today, thanks for taking an interest. I hope you didn’t have to bang your head against the keyboard too much to get this working.

Please feel free to write your comments and experiences below.

If you liked it then you are welcome to shout me a beer or help me out by sharing it with your friends and colleagues.

BYE!!!

TrifleJS


[Image: triflejs.org]

AutoTrader UK


This was a role working on Interface Development for Trader Media’s online marketplace with over 1 billion page impressions per month.

[Screenshot: AutoTrader main page]

I mainly focused on creating rich interfaces and user journeys with Event-driven and Object-Oriented JavaScript, DOM, HTML5, jQuery, AJAX, JSON and Backbone.js MVC framework continuously tested with Jasmine/Rhino.

[Screenshot: AutoTrader gallery]

Our team used an Agile development approach with Continuous Integration (CI), pair-programming and short development cycles, based on the ThoughtWorks Extreme Programming (XP) model for Java/JUnit TDD projects.

[Photo: Agile development at AutoTrader]

25 Techniques for JavaScript Performance Optimization


These are some of the techniques I use for enhancing the performance of JavaScript. They have mostly been collected over various years of using the language to improve the interactivity of websites and web applications.

My thanks go out to Marco of zingzing.co.uk for reminding me of the importance of optimizing JavaScript, and for teaching me some of the techniques below.

Most of the techniques involve common sense once you have understood the underlying problem. I’ve categorised them into 5 broad categories, each with an underlying problem and solution as follows:

1. Avoid interaction with host objects

Watch out for these guys. Repeated interaction with them will kill your performance.

THE PROBLEM:

Native JavaScript is compiled into machine code by most scripting engines, offering an incredible performance boost. However, interaction with host (browser) objects outside the JavaScript native environment introduces unpredictability and considerable performance lag, particularly when dealing with screen-rendered DOM objects or objects which cause disk I/O (such as WebSQL).

THE SOLUTION:

You can’t really get away from them, but keep your interaction with host objects to an absolute minimum.

THE TECHNIQUES:

  1. Use CSS classes instead of JavaScript for DOM animation.

    It’s a good habit to try and implement any animation (or DOM interaction) with CSS if you can get away with it. CSS3 transitions have been around for a while now, so there are few excuses not to use them. You can even use a polyfill if you are worried about older browsers. Think also of hover menus using the :hover pseudo-class, or styling and display of elements using @keyframes, :before and :after. Unlike JavaScript, CSS solutions are heavily optimized by the browser, often down to the level of using the GPU for extra processing power.

    I realize this might sound ironic (if you want to optimize JavaScript, avoid using it for animation), but the reality is that this technique is executed from within your JavaScript code; it just involves putting more effort into the CSS classes.
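
    For instance, a fade-in can live entirely in the stylesheet, with JavaScript doing nothing but flipping a class (class names here are made up for illustration):

    /* In the stylesheet */
    .dialog { opacity: 0; transition: opacity 0.3s ease; }
    .dialog.visible { opacity: 1; }

    // In your JavaScript: one cheap class toggle, and the browser
    // (often the GPU) handles every animation frame for you.
    jQuery('.dialog').addClass('visible');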

  2. Use fast DOM traversal with document.getElementById().

    Given the availability of jQuery, it is now easier than ever to produce highly specific selectors based on a combination of tag names, classes and CSS3. You need to be aware that this approach involves several iterations while jQuery loops through each subset of DOM elements trying to find a match. You can improve DOM traversal speeds by picking nodes by ID instead.

    // jQuery will need to iterate many times until it finds the right element
    var button = jQuery('body div.dialog > div.close-button:nth-child(2)')[0];
    
    // A far more optimized way is to skip jQuery altogether.
    var button = document.getElementById('dialog-close-button');
    
    // But if you need to use jQuery you can do it this way.
    var button = jQuery('#dialog-close-button')[0];
    
  3. Store pointer references to in-browser objects.

    Use this technique to reduce DOM traversal trips by storing references to browser objects during instantiation for later use. For example, if you are not expecting your DOM to change, you should store a reference to the DOM or jQuery objects you are going to use when your page is created; if you are building a DOM structure such as a dialog window, make sure you store a few handy references to the DOM objects inside it during instantiation, so you don’t need to find the same DOM object over and over again when a user clicks on something or drags the dialog window.

    If you haven’t stored a reference to a DOM object and you need to iterate inside a function, you can create a local variable containing a reference to that DOM object; this will considerably speed up the iteration, as the local variable is stored in the most accessible part of the stack.
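
    For example (with hypothetical IDs), query the DOM once during instantiation and reuse the stored references from then on:

    // During dialog instantiation: traverse the DOM once...
    var dialog = {
      el: jQuery('#dialog-window'),
      closeButton: jQuery('#dialog-close-button')
    };

    // ...then reuse the stored references in every handler.
    dialog.closeButton.on('click', function() {
      dialog.el.hide();
    });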

  4. Keep your HTML super-lean (get rid of all those useless DIV and SPAN tags)

    This is extremely important: the time needed to query and modify the DOM is directly proportional to the amount and complexity of HTML that needs to be rendered.

    Using half the amount of HTML will roughly double the DOM speed, and since DOM creates the greatest performance drag on any complex JavaScript app, this can produce a considerable improvement. See ‘Reduce Number of DOM Elements’ guidance in Yahoo YSlow.

  5. Batch your DOM changes, especially when updating styles.

    When making calls to modify the DOM, make sure you batch them up so as to avoid repeated screen rendering, for example when applying styling changes. The ideal approach here is to make many styling changes in one go by adding or removing a class, rather than applying each individual style separately. This is because every DOM change prompts the browser to re-render the whole UI using the box model. If you need to move an item across the page using X+Y coordinates, make sure that these two are applied at the same time rather than separately. See these examples in jQuery:

    // This will incur 5 screen refreshes
    jQuery('#dialog-window').width(600).height(400).css('position', 'absolute')
                           .css('top', '200px').css('left', '200px');
    // Let jQuery handle the batching
    jQuery('#dialog-window').css({
         width: '600px',
         height: '400px',
         position: 'absolute',
         top: '200px',
         left: '200px'
    });
    // Or even better use a CSS class.
    jQuery('#dialog-window').addClass('mask-aligned-window');
  6. Build DOM separately before adding it to the page.

    As per the last item, every DOM update requires the whole screen to be refreshed; you can minimize the impact here by building the DOM for your widget ‘off-line’ and then appending your DOM structure to the page in one go.
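
    A document fragment is one easy way to do this (a sketch with a hypothetical list):

    // Build the widget structure 'off-line', away from the live DOM...
    var fragment = document.createDocumentFragment();
    for (var i = 0; i < 100; i++) {
      var item = document.createElement('li');
      item.appendChild(document.createTextNode('Item ' + i));
      fragment.appendChild(item);
    }

    // ...then touch the rendered page only once.
    document.getElementById('list').appendChild(fragment);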

  7. Use buffered DOM inside scrollable DIVs.

    This is an extension of the fourth point above (keep your HTML super-lean): you can use this technique to remove items from the DOM that are not being visually rendered on screen, such as the area outside the viewport of a scrollable DIV, and append the nodes again when they are needed. This will reduce memory usage and speed up DOM traversal. Using this technique the guys at ExtJS have managed to produce an infinitely scrollable grid that doesn’t grind the browser down to a halt.
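
    The buffering idea in sketch form (the real thing also needs padding elements to keep the scrollbar honest):

    var container = document.getElementById('scrollable');
    var rowHeight = 20;

    container.onscroll = function() {
      // work out which slice of rows is actually visible...
      var first = Math.floor(container.scrollTop / rowHeight);
      var count = Math.ceil(container.clientHeight / rowHeight) + 1;
      // ...and only keep DOM nodes for that slice attached.
      renderRows(first, count); // hypothetical: swaps the attached rows
    };

    function renderRows(first, count) { /* detach old rows, append the visible slice */ }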

2. Manage and Actively reduce your Dependencies

Poorly managed JavaScript dependencies degrade user experience.

THE PROBLEM:

On-screen visual rendering and user experience are usually delayed while waiting for script dependencies to load in the browser. This is particularly bad for mobile users who have limited bandwidth capacity.

THE SOLUTION:

Actively manage and reduce dependency payload in your code.

THE TECHNIQUES: 

  1. Write code that reduces library dependencies to an absolute minimum.

    Use this approach to reduce the number of libraries your code requires to a minimum, ideally to none, thus creating an incredible boost to the loading times required for your page.

    You can reduce dependency on external libraries by making use of as much in-browser technology as you can, for example you can use document.getElementById('nodeId') instead of jQuery('#nodeId'), or document.getElementsByTagName('INPUT') instead of jQuery('INPUT') which will allow you to get rid of jQuery library dependency.

    If you need complex CSS selectors use Sizzle.js instead of jQuery, which is far more lightweight (4kb instead of 80kb+).

    Also, before adding any new library to the codebase, evaluate whether or not you really need it. Perhaps you are just after one single feature in the whole library? If that’s the case then take the code apart and add the feature separately (but don’t forget to check the license and acknowledge the author if necessary).

  2. Minimize and combine your code into modules.

    You can bundle distinct components of your application into combined *.js files and pass them through a JavaScript minimizer tool such as Google Closure Compiler or JSMin that gets rid of comments and whitespace.

    The logic here is that a single request for one minimized 10Kb .js file completes faster than 10 requests for files that are 1-2Kb each, due to lower bandwidth usage and network latency.

  3. Use a post-load dependency manager for your libraries and modules.

    Much of your functionality will not need to be implemented until after the page loads. By using a dependency manager (such as RequireJS or Webpack) to load your scripts after the page has completed rendering you are giving the user a few extra seconds to familiarise themselves with the layout and options before them.

    Make sure that your dependency manager can ‘remember’ which dependencies have been loaded, so you don’t end up loading the same libraries twice for each module. See the guidance for pre-loading and post-loading in Yahoo YSlow, and be mindful about loading only what is necessary at each stage of the user journey.
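
    A bare-bones post-load loader that ‘remembers’ what it has already fetched might look like this (a sketch; RequireJS and Webpack do this properly for you):

    var loaded = {};

    function loadScript(url, callback) {
      if (loaded[url]) return callback(); // never load the same library twice
      var script = document.createElement('script');
      script.src = url;
      script.onload = function() {
        loaded[url] = true;
        callback();
      };
      document.body.appendChild(script);
    }

    // pull in a heavy module only after the page has rendered
    window.addEventListener('load', function() {
      loadScript('/js/gallery-widget.js', function() {
        // initGalleryWidget(); -- hypothetical module entry point
      });
    });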

  4. Maximise use of caching (eTags, .js files, etc).

    Cache is your best friend when it comes to loading pages faster. Try to maximise the use of cache by applying ETags liberally and putting all your JavaScript into files ending in *.js found in static URI locations (avoid dynamic Java/C# bundle generation ending with *.jsp and *.ashx). This will tell the browser to use the locally cached copy of your scripts for any pages loaded after the initial one.

  5. Move scripts to the end of the page (not recommended).

    This is the lazy way of handling post-load dependencies. Ideally you should implement a post-load dependency manager, but if you only have one or two scripts to load into the page, you can add them at the very end of the HTML document, where the browser will start loading them after the page is rendered, giving the user a few extra seconds of interaction.

3. Be disciplined with event binding

Be a ninja when using event handling.

THE PROBLEM:

Browser and custom event handlers are an incredible tool for improving user experience and reducing the depth of the call stack (so you avoid having a function calling a function which calls another function, and so on), but since they are hard to track due to their ‘hidden’ execution, they can fire many times repeatedly and quickly get out of hand, causing performance degradation.

THE SOLUTION:

Be mindful and disciplined when creating event handlers. Get to know your weapons too: if you are using a framework, then find out what’s going on underneath the hood.

THE TECHNIQUES:

  1. Use event binding but do it carefully.

    Event binding is great for creating responsive applications. However, it is important that you walk through the execution and various user journeys to make sure handlers are not firing multiple times or using up unnecessary resources behind the scenes. Comment your code well so the next guy (which may be you a few months down the line) can follow what’s going on and avoid this issue as well.

    If you are using AngularJS make sure you are getting rid of unnecessary ‘watchers’. These are background events that involve heavy processing and will slow down your app, particularly on mobile devices.

  2. Pay special attention to event handlers that fire in quick succession (ie, ‘mousemove’).

    Browser events such as ‘mousemove’ and ‘resize’ are executed in quick succession, up to several hundred times each second. This means that you need to ensure that an event handler bound to either of these events is coded optimally and can complete in less than 2-3 milliseconds.

    Any overhead greater than that will create a patchy user experience, especially in browsers such as IE that have poor rendering capabilities.
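
    A simple throttle helper keeps such handlers within budget by letting them run at most once every so many milliseconds:

    function throttle(fn, ms) {
      var last = 0;
      return function() {
        var now = new Date().getTime();
        if (now - last >= ms) {
          last = now;
          fn.apply(this, arguments);
        }
      };
    }

    // the expensive work now runs at most 20 times a second
    jQuery(window).on('mousemove', throttle(function(event) {
      // ... expensive work ...
    }, 50));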

  3. Remember to unbind events when they are not needed.

    Unbinding events is almost as important as binding them. When you add a new event handler to your code, make sure that you provide for it to stop firing when it is no longer needed, ideally using once-off execution constructs like jQuery.one() or coding in the unbind behaviour at that point. Otherwise you can end up with the same handler bound multiple times, degrading performance, or with events firing when they are no longer needed; the latter often points to memory leaks in your app, as pointed out by the React developers in this blog post.

    If you are using jQuery to bind and unbind events, make sure your selector points to a unique node, as a loose selector can create or remove more handlers than you intend.
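
    For example:

    function closeDialog() { /* ... */ }

    // jQuery.one() unbinds the handler automatically after the first run...
    jQuery('#dialog-close-button').one('click', closeDialog);

    // ...or unbind explicitly, with a selector tight enough to remove
    // only the handler you mean to remove.
    jQuery('#dialog-close-button').off('click', closeDialog);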

  4. Learn about event bubbling.

    If you are going to use event handlers, it is important that you understand how event bubbling propagates an event up the DOM tree to every ancestor node. You can use this knowledge to limit your dependency on event bubbling, with approaches such as jQuery.live() and jQuery.delegate() that require full DOM traversal upon handling each event, or to stop event bubbling altogether for improved performance. See this great post on the subject.
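
    For example, stopping a handled event from bubbling any further up the tree:

    jQuery('#dialog-close-button').on('click', function(event) {
      event.stopPropagation(); // no ancestor handlers fire for this click
      // ... close the dialog ...
    });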

  5. Use ‘mouseup’ instead of ‘click’.

    Remember that user interaction via the mouse or keyboard fires several events in a specific order. It is useful to remember the order in which these events fire so you can squeeze in your functionality before anything else gets handled, including native browser event handlers.

    A good example of this is to bind your functionality to the ‘mouseup’ event, which fires before the ‘click’ event. This can produce a surprising performance boost in older browsers such as IE, making the difference between handling every interaction or missing some of the action when the user clicks many times in succession.

4. Maximise the efficiency of your iterations

Performance becomes critical during long iterations.

THE PROBLEM:

Due to the processing time involved, iterations are usually the first place to look for performance flaws in an application.

THE SOLUTION:

Get rid of unnecessary loops and calls made inside loops.

THE TECHNIQUES:

  1. Harness the indexing power of JavaScript objects.

    Native JavaScript objects {} can be used as powerful Hashtable data structures with quick-lookup indexes to store references to other objects, acting similarly to the way database indexes work for speeding up search operations by preventing needless looping. 

    So why bother iterating to find something? You can simply use a plain object as an index (think of a phone-book) to get to your desired item quickly and efficiently. Here is an example:

    var data = {
      index: {
                "joeb": {name: "joe", surname: "bloggs", age: 29 },
                "marys": {name: "mary", surname: "smith", age: 25 }
                // another 1000 records
             },
      get: function(username) {
                return this.index[username];
             }
    }
  2. Harness the power of array structures with push() and pop() and shift().

    Array push(), pop() and shift() instructions have minimal processing overhead (object manipulation can carry some 20x more), due to being language constructs closely related to their low-level assembly-language counterparts. In addition, using queue and stack data structures can help simplify your code logic and get rid of unnecessary loops. See more on the topic in this article.
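
    For example:

    var queue = [];           // first-in, first-out
    queue.push('job1');       // enqueue at the back
    queue.push('job2');
    var next = queue.shift(); // dequeue from the front -> 'job1'

    var stack = [];           // last-in, first-out
    stack.push('state1');
    stack.push('state2');
    var top = stack.pop();    // -> 'state2'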

  3. Take advantage of reference types.

    JavaScript, much like other C-based languages, has both primitive and reference value types. Primitive types such as strings, booleans and integers are copied whenever they are passed into a new function; however, reference types such as arrays, objects and dates are passed only as a light-weight reference. You can use this to get the most performance out of recursive functions, such as by passing a DOM node reference recursively to minimise DOM traversal, or by passing a reference parameter into a function that executes within an iteration. Also, remember that comparing object references is far more efficient than comparing strings.

  4. Use Array.prototype.join() for string concatenation.

    PLEASE NOTE: This article was first written in 2012 when string concatenation was a hazard to be aware of. However, these days most JavaScript engines have compilation tricks that have made this issue obsolete. The wording below is only really relevant for historical purposes.

    Joining strings using the plus sign (ie var ab = 'a' + 'b';) creates performance issues in IE when used within an iteration. This is because, like Java and C#, JavaScript uses immutable strings.

    Basically, when you concatenate two strings, a third string is constructed for gathering the results, with its own object instantiation logic and memory allocation. While other browsers have various compilation tricks around this, IE is particularly bad at it. A far better approach is to use an array for carrying out the donkey work: create an array outside the loop, use push() to add items to it, and then join() to output the results. See this link for a more in-depth article on the subject.

5. Become friends with the JavaScript lexicon

Become a friend of the ECMA standard and it will make your code faster.

THE PROBLEM:

Due to its loosely-typed and free-for-all nature, JavaScript can be written using a very limited subset of lexical constructs with no discipline or controls applied to its use. Using simple function patterns repetitively often leads to poorly thought-out ‘spaghetti’ code that is inefficient in terms of resource use.

THE SOLUTION:

Learn when and how to apply the constructs of the ECMAScript language standard to maximise performance.

THE TECHNIQUES:

  1. Shorten the scope chain

    In JavaScript, whenever a function is executed, a set of first-order variables is instantiated as part of that function. These include the immediate scope of the function (the this variable) with its own scope chain, the arguments of the function and all locally-declared variables.

    If you try to access a globally-declared variable or a closure further up the scope chain, it will take extra effort to traverse up the chain at every level until the compiler can wire up the variable you are after. You can thus improve execution by reducing the depth of the call stack and by only using the local scope (this), the arguments of the function, and locally declared variables. This article explains the matter further.
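
    For example, copying a closure or global variable into the local scope before a long loop saves the lookup on every pass:

    var config = { step: 2 }; // global, further up the scope chain

    function sum(values) {
      var step = config.step; // one lookup, stored at the top of the chain
      var total = 0;
      for (var i = 0; i < values.length; i++) {
        total += values[i] * step; // no scope-chain traversal inside the loop
      }
      return total;
    }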

  2. Make use of ‘this’, by passing correct scope using ‘call’ and ‘apply’.

    This is particularly useful for writing asynchronous code using callbacks, however it also improves performance because you are not relying on global or closure variables held further up the scope chain. You can get the most out of the scope variable (this) by rewiring it using the special call() and apply() methods that are built into each function. See the example below:

    var Person = {
      init: function(name) {
         this.name = name;
      },
      do: function(callback) {
         callback.apply(this);
      }
    };
    var john = Object.create(Person);
    john.init('john');
    john.do(function() {
        alert(this.name); // 'john' gets alerted because we rewired 'this'.
    });
  3. Learn and use native functions and constructs.

    ECMAScript provides a whole host of native constructs that save you having to write your own algorithms or rely on host objects. Some examples include Math.floor() and Math.round(), (new Date()).getTime() for timestamps, String.prototype.match() and String.prototype.replace() for regexes, parseInt(n, radix) for changing numeral systems, === instead of == for faster type-based comparison, instanceof for checking type up the hierarchy, and & and | for bitwise comparisons. And the list goes on and on.

    Make sure you use all these instead of trying to work out your own algorithms as you will not only be reinventing the wheel but affecting performance.

  4. Use ‘switch’ instead of lengthy ‘if-then-else’ statements.

    This is because ‘switch’ statements can be optimized more easily during compilation. There is an interesting article in O’Reilly about using this approach with JavaScript.


WP Simple SpamCheck


This plugin allows WordPress to block over 95% of spam comments using a time-based hash.

This allows for a minimum sanity check and yet should remove almost all spam comments without the need to sign up to any third party APIs.

You are welcome to install and use this WordPress plugin. I developed it out of frustration at having to sign up and pay for a key to the Akismet API, knowing that a simple time-based input validation could get rid of the majority of my spam comments.

This is what the plugin looks like once installed.

So far I have been using this plugin myself for the past 12 months and I am very happy with the results. I normally receive around 400-600 spam comments a week, and this has cut that down to an average of 1-2, which is far more manageable.

The solution is pretty low-tech; it only took about 2 days to put together using some time-validation techniques I’ve successfully used in the past for one of my other websites (www.valuetrader.info).
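
The general idea behind that time-based validation runs along these lines. This is only a rough sketch of the concept, not the plugin’s actual code, and the field and secret names are made up:

// Rough sketch of a time-based spam check (not the plugin's actual code)
$secret = 'some-site-secret';             // hypothetical secret salt
$stamp  = time();
// Embed this token as a hidden field when rendering the comment form:
$token  = $stamp . '|' . md5($stamp . $secret);

// On submission, verify the token and the elapsed time:
list($then, $hash) = explode('|', $_POST['sc_token']);
$valid = ($hash === md5($then . $secret)) // token must not be forged
      && (time() - (int)$then > 5);       // humans take a few seconds to type
if (!$valid) {
    wp_die('Comment rejected by spam check.');
}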

The plugin is pretty effective given the lack of sophistication employed by the majority of spam bots; however, it is not very advanced, and for that reason some spam comments may still make it through.

To install follow these instructions:

1. Download the file wp-simple-spamcheck.zip to your desktop.
2. Open the ‘Plugins’ section of your site.
3. Click on ‘Add New’ and then ‘Upload’.
4. Select the ‘wp-simple-spamcheck.zip’ file you just saved and press ‘Install Now’.
5. Click ‘Apply’ once the installation has completed.
6. Hopefully, the ‘(Spamcheck Enabled)’ message should appear when entering comments.

Please be aware that some templates may not be able to implement this spam check plugin. If the ‘(Spamcheck Enabled)’ message does not appear, just uninstall it and search for a different plugin from the other available options.

If you have installed this plugin and find it useful, please give it a rating on the WordPress plugin website so that other users can see it.

If you have any other problems just drop me a line here.

Object-Oriented JavaScript Inheritance


This is simple but crucial stuff in JavaScript. It’s easy to forget how to do object-oriented inheritance from scratch when you are dealing with several JS frameworks, each of which has pre-built methods that support this functionality in a slightly different way.

JavaScript is a functional language that uses prototypal inheritance. That means you can emulate classical inheritance of the kind supported by Java and C#, but you need to be disciplined about the way you write code and do a couple of things every time you add a class to your type hierarchy.

When creating a new class, we are essentially creating a function and assigning a new instance of another function as the prototype. This is the basis of ‘prototypal’ inheritance, i.e. if you are dealing with a ‘Dog’ class that inherits from an ‘Animal’ class, the template for your dog is a newly created instance of Animal.

For Example:

// Create a new class called 'Animal'
var Animal = function() {};

// Create an instance of the 'Animal'
var anAnimal = new Animal();

// Create a class called 'Dog'
var Dog = function() {};

// Assign the template
Dog.prototype = anAnimal;

// Instantiate the dog
var lassie = new Dog();

Abstracting inheritance

I’ve created a simple static method called ‘extend’ on the Object class. Most frameworks do this in one shape or another (i.e. ExtJS uses Ext.extend(), John Resig likes to use a Class.extend() method, MooTools uses a new Class(properties) approach, etc).

A ‘static’ method means that only the ‘Object’ class template will have it, so we do not pass it over to our children through inheritance. See the following code:

// Create a static 'extend' method on the Object class
// This allows us to extend existing classes
// for classical object-oriented inheritance
Object.extend = function(superClass, definition) {
    var subClass = function() {};
    subClass.prototype = new superClass();
    for (var prop in definition) {
        subClass.prototype[prop] = definition[prop];
    }
    return subClass;
};

This allows us to simplify the process of using object-oriented inheritance by abstracting the prototype assignment into this separate function call.

// Create an 'Animal' class by extending
// the 'Object' class with our magic method
var Animal = Object.extend(Object, {
    move : function() {alert('moving...');}
});

// Create a 'Dog' class that extends 'Animal'
var Dog = Object.extend(Animal, {
    bark : function() {alert('woof');}
});

// Instantiate Lassie
var lassie = new Dog();

// She can move AND bark!
lassie.move();
lassie.bark();

Adding Constructors

But, we seem to have forgotten one thing. What about constructors?

JavaScript uses the function itself as the constructor for a new object. Thus, when we created the Animal object above we could have simply added some sample code to its constructor as follows:

// Assign the name of the animal when it gets instantiated
var Animal = function(name) {
  this.name = name;
};

var lassie = new Animal('Lassie');
alert('My pets name is: ' + lassie.name);

Thus we can augment our class creation methodology using the same idea:

// Create a 'Subclass' with a constructor
var SubClass = Object.extend(SuperClass, {
    constructor: function(parameter) {
        this.x = parameter;
    },
    method1: function() {
        // ...
    }
});

// Instantiate it
var subClass = new SubClass('value');
alert(subClass.x);

In order to allow this kind of notation, we can modify our ‘extend’ method to take care of this special ‘constructor’ function. This is done as follows:

// Create a static 'extend' method on the Object class
// This allows us to extend existing classes
// for classical object-oriented inheritance
Object.extend = function(superClass, definition) {
    var subClass = function() {};
    // Our constructor becomes the 'subclass'
    if (definition.constructor !== Object)
        subClass = definition.constructor;
    subClass.prototype = new superClass();
    for (var prop in definition) {
        if (prop !== 'constructor')
            subClass.prototype[prop] = definition[prop];
    }
    return subClass;
};

And put it to use by writing minimal code that is meaningful to read and leaves the object-creation abstraction sitting behind the scenes:

// Create the 'Animal' class by extending
// the 'Object' class with our magic method
// this time using a constructor
var Animal = Object.extend(Object, {
    constructor: function(name) {
        this.name = name;
    },
    move: function() {
        alert('moving...');
    }
});

// Instantiate Lassie (as an animal)
var lassie = new Animal('Lassie');

// Now lassie has a name which is
// defined inside the object constructor
alert('My pets name is: ' + lassie.name);

Calling constructors through the inheritance chain.

Now we have come up against a small problem: our ‘Animal’ class can have a constructor, but its child class ‘Dog’ has no way to call this constructor to take advantage of the logic for instantiating all animals.

In order to do this we are going to add a special ‘superClass’ property to all of our classes automatically, then use the magic ‘call’ function to invoke it with the context of the dog instance (to put it in plain English, this will make each ‘Dog’ run the construction logic for an ‘Animal’).

Let’s see it all together now:

// Create a static 'extend' method on the Object class
// This allows us to extend existing classes
// for classical object-oriented inheritance
Object.extend = function(superClass, definition) {
    var subClass = function() {};
    // Our constructor becomes the 'subclass'
    if (definition.constructor !== Object)
        subClass = definition.constructor;
    subClass.prototype = new superClass();
    for (var prop in definition) {
        if (prop !== 'constructor')
            subClass.prototype[prop] = definition[prop];
    }
    // Keep track of the parent class
    // so we can call its constructor too
    subClass.superClass = superClass;
    return subClass;
};

// Create the 'Animal' class by extending
// the 'Object' class with our magic method
// this time using a constructor
var Animal = Object.extend(Object, {
    constructor: function(name) {
        this.name = name;
    },
    move: function() {
        alert('moving...');
    }
});

// Create a 'Dog' class that inherits from it
var Dog = Object.extend(Animal, {
    constructor: function(name) {
        // Remember to call the super class constructor
        Dog.superClass.call(this, name);
    },
    bark: function() {
        alert('woof');
    }
});

// Instantiate Lassie
var lassie = new Dog('Lassie');

// She can move AND bark AND has a name!
lassie.move();
lassie.bark();
alert('My pets name is: ' + lassie.name);

Multiple inheritance using Interfaces

Now it’s possible to add support for multiple inheritance to our object creation syntax in JavaScript. In single-inheritance object-oriented languages such as Java and C#, multiple inheritance (inheriting properties and methods from more than one class chain) is done using Interfaces.

The implementation is a bit more crude than in Java or C#, because JavaScript, unlike those languages, is not compiled, so we can’t throw compile-time errors. This means that when we implement an interface, we are limited to checking at runtime that the appropriate members are there, and throwing an error if they are not.

So, lets start first with our intended usage for Interfaces:

var SubClass = Object.extend(SuperClass, {
     method1: function() {
         alert('something');
     }
}).implement(Interface1, Interface2 ...);

Going by this approach, we need to create a method ‘implement’ for every class. Since in JavaScript our classes are instances of the ‘Function’ object, this means adding the method to Function.prototype.

The code does start to get a bit hairy from this point. It’s important to point out that the context of this function call (i.e. this) is the class that we are testing for implementation of a particular interface.

Function.prototype.implement = function() {
    // Loop through each interface passed in and then check
    // that its members are implemented in the context object (this)
    for (var i = 0; i < arguments.length; i++) {
         var interf = arguments[i];
         // Is the interface a plain object definition?
         if (interf.constructor === Object) {
             for (var prop in interf) {
                 // Check methods and fields vs context object (this)
                 if (interf[prop].constructor === Function) {
                     if (!this.prototype[prop] ||
                          this.prototype[prop].constructor !== Function) {
                          throw new Error('Method [' + prop
                               + '] missing from class definition.');
                     }
                 } else {
                     if (this.prototype[prop] === undefined) {
                          throw new Error('Field [' + prop
                               + '] missing from class definition.');
                     }
                 }
             }
         }
    }
    // Remember to return the class being tested
    return this;
};

And there you have it! It’s important to point out that this is one of many ways to abstract object-orientation in JavaScript. There are different approaches, and as a programmer it helps to pick one you are comfortable with. Personally, I like this approach because of its simplicity, readability, and ease of use:

// Create a 'Mammal' interface
var Mammal = {
    nurse: function() {}
};

// Create the 'Pet' interface
var Pet = {
    do: function(trick) {}
};

// Create a 'Dog' class that inherits from 'Animal'
// and implements the 'Mammal' and 'Pet' interfaces
var Dog = Object.extend(Animal, {
     constructor: function(name) {
          Dog.superClass.call(this, name);
     },
     bark: function() {
          alert('woof');
     },
     nurse: function(baby) {
          baby.food = 100;
     },
     do: function(trick) {
          alert(trick + 'ing...');
     }
}).implement(Mammal, Pet);

// Instantiate it
var lassie = new Dog('Lassie');
lassie.move();
lassie.bark();
lassie.nurse(new Dog('Baby'));
lassie.do('fetch');
alert('My pets name is: ' + lassie.name);

How to obtain SOAP Request body in C# Web Services


Microsoft left something out when designing web services; fortunately, there is a nifty way to obtain the original SOAP request within a C# web service.

I’ve written an article on this topic before. It’s possible to obtain the SOAP request body for logging purposes by using SoapExtensions. And that’s all well and good if you want to log the traffic between your SOAP web service and the outside world. But what if you want to change the behaviour of your web services based on the input that comes in?

Say, for example, you want to validate your SOAP request against an XML schema to enforce stricter validation than what comes out of the box with .NET.

The process is quite simple: you need to find the Request object and load the contents of a little-known property called ‘InputStream’. You can mine the contents of the SOAP request and load them into an XML document easily as follows:

Creating an ‘Echo’ Soap Request

Using this technique we can create a simple Web Service that performs a simple ‘Echo’ of whatever you send into it. See the following code:

using System;
using System.Collections.Generic;
using System.Web;
using System.Xml;
using System.IO;
using System.Text;
using System.Web.Services;
using System.Web.Services.Protocols;

namespace SoapRequestEcho
{
  [WebService(
  Namespace = "http://soap.request.echo.com/",
  Name = "SoapRequestEcho")]
  public class EchoWebService : WebService
  {
    [WebMethod(Description = "Echo Soap Request")]
    public XmlDocument EchoSoapRequest(int input)
    {
      // Initialize soap request XML
      XmlDocument xmlSoapRequest = new XmlDocument();
    
      // Get raw request body
      using (Stream receiveStream = HttpContext.Current.Request.InputStream)
      {
        // Move to beginning of input stream and read
        receiveStream.Position = 0;
        using (StreamReader readStream = 
                               new StreamReader(receiveStream, Encoding.UTF8))
        {
          // Load into XML document
          xmlSoapRequest.Load(readStream);
        }
      }
      // Return
      return xmlSoapRequest;
    }
  }
}

Testing our Soap Request

We can quickly test our SOAP request and check that whatever XML is sent in comes out the other side untouched.
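
For instance, a request along these lines (the endpoint path is illustrative; the envelope is a hand-written example based on the namespace and method declared above):

POST /EchoWebService.asmx HTTP/1.1
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://soap.request.echo.com/EchoSoapRequest"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <EchoSoapRequest xmlns="http://soap.request.echo.com/">
      <input>1</input>
    </EchoSoapRequest>
  </soap:Body>
</soap:Envelope>

The same envelope should come back wrapped inside the SOAP response, since the web method returns the parsed request document.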

Next Steps: Performing Schema Validation

Now you can do what you want with your xmlSoapRequest object. It’ll contain exactly the same request as was sent into SOAP in the first place.

If you are after schema validation, the next step is to populate the xmlSoapRequest.Schemas property and then fire off the xmlSoapRequest.Validate() method.
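
For example, a minimal sketch that could sit inside the web method above, after loading xmlSoapRequest (the schema file name and target namespace are hypothetical):

// Attach a hypothetical schema and validate the request against it
xmlSoapRequest.Schemas.Add("http://soap.request.echo.com/", "EchoRequest.xsd");
xmlSoapRequest.Validate((sender, e) =>
{
  // Surface any validation problem back to the SOAP caller
  throw new SoapException(e.Message, SoapException.ClientFaultCode);
});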

Piece of cake.

SQL XML Performance in High-Volume Databases


XML may be a drag, but you can use it within SQL to turn your database server into a high-performance love machine.

Now I know many of you will be wondering: XML, performance and high-volume in the same sentence? Surely you must have gone nuts!

I can promise you I haven’t gone nuts. While I agree that XML in the back-end is bulky, unruly, and often a cause of performance degradation rather than the good news you desperately want to hear, there is at least one place where it can make a difference for the better.

Stored Procedures and their Limitations

You see, back when Stored Procedures for relational databases were first created, they quickly became the greatest thing since sliced bread (and boy, were they an improvement over writing SQL code directly into your application). However, there was one little problem with Stored Procedures that remained unsolved for a long time. SQL deals in RecordSets (i.e. tables); it is the essence of the language. Yet the input possibilities for Stored Procedures were always pretty limited: simple data types such as strings, numbers, and booleans. Until recently there was no parameter of data type RecordSet, so you couldn’t easily pass a list of things as input into a Stored Procedure.

You see, most applications deal with CRUD operations (Create, Read, Update, Delete), and of those, Stored Procedures can only output (Read) many records at a time. The CUD part (Create, Update and Delete) had to be done one record at a time when using simple data inputs. For most applications it remains this way even today.

Sometimes developers come up with a workaround to enter a list of parameters

This has long been a bit of a problem, and many developers over the years have come up with workarounds (like using a long list of pipe-separated values), but the solutions have ranged from the not-so-great to the let’s-hold-our-breath-and-hope-it-doesn’t-fail-spectacularly.

High Volume Inserts and Updates

Entering records one at a time is fine and dandy for most applications; however, those requiring high-volume inserts and updates are severely constrained by this fact. You ask why? Well, imagine you have an input data feed that needs to insert 10,000 records into a table, then return a message to say how things went. There are 2 ways to do this:

a) You split the records and perform 10,000 separate INSERT operations, or

b) You keep the records together and perform a single INSERT operation with 10,000 records.

Which one do you think will perform faster?

It’s a no-brainer really: calling a stored procedure once and performing a single INSERT operation will be significantly faster (usually over 1,000 times faster) than doing all the individual inserts one at a time, especially when you factor in the network latency between your application server and database server incurred by repeating multiple procedure calls.

Here I made a pretty picture so you get the idea:

I hope you made some coffee, this is going to take a while.

So if you plan to insert one record at a time, the other side will probably have to wait a few minutes or hours to get a response back from you. However if you perform the load as a single INSERT, you can probably get a message back to them within a few seconds.

Now your standard run-of-the-mill developer will say: “Hey, we can thread this out into 100 different concurrent calls to the database!” But the database server can only handle so many concurrent INSERT operations at a given time, not to mention that it might become unresponsive under the sudden overload, and that you are using up a lot of unnecessary bandwidth in the form of additional calls coming both ways over the network. Ultimately there is a better solution than the hammer-it-harder approach.

XML Saves the Day

So how does XML feature into this discussion?

Well, you see, SQL Server (and Oracle) has a handy XML data type that can ALSO BE USED AS INPUT into a Stored Procedure. This technique has been available as far back as SQL Server 2000, but many developers are not aware of it.

This way you can get a response back in a few seconds.

It’s quite easy to strip out records from XML input, as the sketch below shows. You can even perform XML Schema validation inside SQL Server, but I’m not going to get into that today.
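
By way of illustration, here is a minimal sketch of the technique. The table, procedure, and element names are made up for the example, and the nodes() syntax shown requires the xml data type available from SQL Server 2005 onwards (SQL Server 2000 relied on OPENXML instead):

-- Hypothetical procedure: inserts all records from a single XML parameter
CREATE PROCEDURE InsertReadings
    @input XML
AS
BEGIN
    -- Shred the XML into a record set and INSERT every row in one go
    INSERT INTO Readings (SensorId, Value)
    SELECT
        r.value('@sensorId', 'INT'),
        r.value('@value', 'FLOAT')
    FROM @input.nodes('/readings/reading') AS T(r);
END

And calling it takes a single round-trip, no matter how many records you send:

-- One call to the database, however many records you like
EXEC InsertReadings @input = '<readings>
    <reading sensorId="1" value="0.75" />
    <reading sensorId="2" value="0.40" />
</readings>';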

(I’ll follow up on this a bit later. Just gotta get some stuff done first)