
desalasworks presents:

a portfolio of work by steven de salas

Markit Commission Manager

Commission Manager is a trade reconciliation tool enabling brokers and funds in the world's largest financial institutions to settle their commission differences.

The Project

The project involved building a standardised platform to aggregate and manage trade information, commission balances, vendor invoices and payments.

Functionality

Markit Commission Manager enables users to reconcile their trading commissions with multiple counterparties and then instruct those counterparties to pay for research and brokerage services – all from a single platform. This efficient workflow eases the administrative burden of managing multiple commission arrangements.

The functionality is centred on trade reconciliation, raising invoices and managing balance differences.

  • Detailed reporting of balance, trade and invoice information
  • Efficient tools to manage and highlight trade breaks
  • Administrative configurations for new users, tolerance levels, and arrangements
  • Compliance tracking of interactions within the Commission Manager system

Business Partner Integration

In addition, it was necessary to provide systems integration with all broker-dealers that were involved in the project.

Comments on the Commission Manager platform

Sofia Rossato – Head of Markit Research Manager at Markit

Markit Commission Manager is the latest addition to our Markit Research Manager range of services. Our objective is to enable investment firms to manage their entire research workflow – including sourcing research, tracking corporate access, voting on brokers and managing research commissions – from one single platform to bring greater efficiency and internal visibility to the whole process. Markit Commission Manager is the final piece in the jigsaw. We have partnered closely with BofA Merrill Lynch, Barclays Capital, Citi, Credit Suisse, Deutsche Bank, Goldman Sachs, J.P. Morgan and Morgan Stanley to ensure the platform meets the needs of the industry.

Frank Volino – Head of Global Commission Management Services at Citi

This platform is good news for buy-side and sell-side alike. It will allow the buy-side to use a standardised set of tools to manage their commission credits at multiple broker-dealers. We are very pleased to be part of this important industry initiative. This platform is in a strong position to become the industry standard.

Commission Manager in the News


07 Oct 2011
Markit launches commission management platform
http://www.bankingtech.com/bankingtech/article.do?articleid=20000213861


04 Oct 2011
Broker-Dealers Fund Markit’s New Commission Management Platform
http://www.securitiestechnologymonitor.com/news/markit-brokers-commission-29174-1.html

SQL XML Performance in High-Volume Databases

XML may be a drag, but you can use it within SQL to turn your database server into a high-performance love machine.

Now I know many of you will be wondering: XML, performance and high-volume in the same sentence? Surely you must have gone nuts!

I can promise you I haven't gone nuts. While I agree that XML in the back-end is bulky, unruly and often a cause of performance degradation rather than the good news you desperately want to hear, there is at least one place where it can make a difference for the better.

Stored Procedures and their Limitations

You see, back when Stored Procedures for relational databases were first created, they quickly became the greatest thing since sliced bread (and boy, were they an improvement over writing SQL code directly into your application). However, there was one little problem with Stored Procedures that remained unsolved for a long time. SQL deals in RecordSets (i.e. tables); they are the essence of the language. Yet the input possibilities for Stored Procedures were always pretty limited: simple data types such as strings, numbers and booleans. Until recently there was no parameter of data type RecordSet, so you couldn't easily pass a list of things into a Stored Procedure.

You see, most applications deal with plenty of CRUD operations (Create, Read, Update, Delete), and of those, a Stored Procedure can only do one of them (Read) many records at a time. The CUD part (Create, Update and Delete) had to be done one record at a time when using simple data inputs, and for most applications it remains this way even today.

Sometimes developers come up with a workaround to enter a list of parameters

This has long been a bit of a problem, and many developers over the years have come up with workarounds (like passing a long list of pipe-separated values), but the solutions have ranged from the not-so-great to the let's-hold-our-breath-and-hope-it-doesn't-fail-spectacularly.
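To make that concrete, the calling side of the workaround tends to look something like this sketch (the database, procedure and parameter names here are made up for illustration):

using System;
using System.Data;
using System.Data.SqlClient;

class PipeSeparatedWorkaround
{
    static void Main()
    {
        // Pack a list of order IDs into one pipe-separated string,
        // because the procedure only accepts simple scalar parameters.
        string orderIds = string.Join("|", new[] { "1001", "1002", "1003" });

        using (var conn = new SqlConnection("Server=.;Database=Sales;Integrated Security=true"))
        using (var cmd = new SqlCommand("dbo.ProcessOrders", conn)) // hypothetical procedure
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // The procedure is left with the fragile job of splitting on '|'.
            cmd.Parameters.AddWithValue("@OrderIds", orderIds);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}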

High Volume Inserts and Updates

Entering records one at a time is fine and dandy for most applications, however those requiring high-volume inserts and updates are severely constrained by this fact. Why, you say? Well, imagine you have an input data feed that needs to insert 10,000 records into a table, then return a message to say how things went. There are two ways to do this:

a) You split the records and perform 10,000 separate INSERT operations, or

b) You keep the records together and perform a single INSERT operation with all 10,000 records.

Which one do you think will perform faster?

It's a no-brainer really: calling a stored procedure once and performing a single INSERT operation will be significantly faster (usually over 1,000 times faster) than doing all the individual inserts one at a time, especially when you factor in the network latency between your application server and database server when you are repeating multiple procedure calls.

Here I made a pretty picture so you get the idea:

I hope you made some coffee, this is going to take a while.

So if you plan to insert one record at a time, the other side will probably have to wait a few minutes or hours to get a response back from you. However, if you perform the load as a single INSERT, you can probably get a message back to them within a few seconds.
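In code, option (a) boils down to a loop like this hypothetical one, paying a full network round trip for every single record (procedure name and connection details are placeholders):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

class OneAtATime
{
    static void Main()
    {
        var records = Enumerable.Range(1, 10000).Select(i => "Record " + i);

        using (var conn = new SqlConnection("Server=.;Database=Feeds;Integrated Security=true"))
        {
            conn.Open();
            foreach (var record in records)
            {
                using (var cmd = new SqlCommand("dbo.InsertRecord", conn)) // hypothetical procedure
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Value", record);
                    // One full round trip per record: at even 10ms of network
                    // latency per call, 10,000 calls is 100 seconds of pure waiting.
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}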

Now your standard run-of-the-mill developer will say: “Hey, we can thread this out into 100 different concurrent calls to the database!” But the database server can only handle so many concurrent INSERT operations at a given time, it might become unresponsive under the sudden overload, and you are using up a lot of unnecessary bandwidth in the form of additional calls coming both ways over the network. Ultimately there is a better solution than the hammer-it-harder approach.

XML Saves the Day

So how does XML feature into this discussion?

Well, you see, SQL Server (and Oracle too) has a handy XML data type that can ALSO BE USED AS INPUT into a Stored Procedure. The native XML type arrived with SQL Server 2005, and you could pull off the same trick as far back as SQL Server 2000 using OPENXML, but many developers are not aware of it.

This way you can get a response back in a few seconds.

It's quite easy to strip records out of XML input. You can even perform XML Schema validation inside SQL Server, but I'm not going to get into that today.
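Here is a minimal sketch of the idea (the table, procedure and connection details are my own placeholders, not from a real system): the client packs everything into one XML document and makes a single procedure call, and the procedure shreds the rows back out with the nodes() method in one set-based INSERT.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Text;

class XmlBatchInsert
{
    static void Main()
    {
        // 1. Pack all 10,000 records into a single XML document.
        var xml = new StringBuilder("<Records>");
        for (int i = 1; i <= 10000; i++)
            xml.AppendFormat("<Record Value=\"Record {0}\"/>", i);
        xml.Append("</Records>");

        // 2. One procedure call, one round trip, one INSERT.
        using (var conn = new SqlConnection("Server=.;Database=Feeds;Integrated Security=true"))
        using (var cmd = new SqlCommand("dbo.InsertRecords", conn)) // hypothetical procedure
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@Records", SqlDbType.Xml).Value = xml.ToString();
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // The matching procedure (also a sketch) shreds the XML in one go:
        //   CREATE PROCEDURE dbo.InsertRecords @Records XML AS
        //   INSERT INTO Records (Value)
        //   SELECT r.value('@Value', 'NVARCHAR(100)')
        //   FROM @Records.nodes('/Records/Record') AS T(r);
    }
}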

(I’ll follow up on this a bit later. Just gotta get some stuff done first)

AJAX database access in C# – The simple way

Today, I’m going to throw the Microsoft textbook out the window and show you a really easy way to get your database records into a JavaScript application. Minimal hassle – maximum bang for your buck.

First, I’m assuming you chose a JavaScript framework such as Ext JS, Dojo, Yahoo UI, JQuery or any other fine library for your front-end widgets. If so then congratulations, this article is just for you.

The trick is to keep the middle-tier C# layer as thin as possible, implementing only the things your client layer can't do reliably: Security and Data Access.

In this example I am only showing how to write minimal code for Data Access; Security is too long a topic for a single article.

READING FROM THE DATABASE

Say you have a database with products in it. For this example I am using the Northwind database:

Here is some C# code I wrote earlier to open up the database and read the first record. If you want some examples of valid connection strings you can look here and here.
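As a minimal sketch of what that code does (the connection string and query are my assumptions; the line numbers discussed below refer to the original listing):

using System;
using System.Data;
using System.Data.SqlClient;

// Product.aspx.cs - returns the first product as XML (sketch).
public partial class Product : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var table = new DataTable("Products");

        using (var connection = new SqlConnection("Server=.;Database=Northwind;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT TOP 1 * FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // Pull the result set straight into a DataTable.
                table.Load(reader, LoadOption.Upsert);
            }
        }

        // Write the table back out as XML instead of rendering HTML.
        Response.Clear();
        Response.ContentType = "text/xml";
        var writer = Response.Output;
        table.WriteXml(writer);
        Response.End();
    }
}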

Here is the output when you run this code in your browser:

The magic here happens in lines 32 and 38 of the original listing.

Line 32 uses DataTable.Load() to get the database contents into a .NET data table as follows:

32     table.Load(reader, LoadOption.Upsert);

Line 38 uses the DataTable.WriteXml() method to write the contents of the table in XML format as an HTTP response.

38     table.WriteXml(writer);

Now, to go one step further, your AJAX application needs to read INDIVIDUAL records (that means one at a time) and show them to the user.

Here are some modifications I made to the earlier code for this purpose:
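Again as a sketch rather than the original listing: the change is to read an optional ID from the query string and parameterise the WHERE clause (same Northwind assumptions as before).

using System;
using System.Data;
using System.Data.SqlClient;

// Product.aspx.cs - returns all products, or a single one if ?ID=n is given (sketch).
public partial class Product : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string id = Request.QueryString["ID"]; // e.g. Product.aspx?ID=8
        string sql = "SELECT * FROM Products";
        if (!string.IsNullOrEmpty(id))
            sql += " WHERE ProductID = @ID"; // parameterised: no SQL injection

        var table = new DataTable("Products");
        using (var connection = new SqlConnection("Server=.;Database=Northwind;Integrated Security=true"))
        using (var command = new SqlCommand(sql, connection))
        {
            if (!string.IsNullOrEmpty(id))
                command.Parameters.AddWithValue("@ID", int.Parse(id));
            connection.Open();
            using (var reader = command.ExecuteReader())
                table.Load(reader, LoadOption.Upsert);
        }

        Response.Clear();
        Response.ContentType = "text/xml";
        table.WriteXml(Response.Output);
        Response.End();
    }
}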

And if you run this code and append “?ID=8” to the end of your request (which you can easily do within JavaScript), you get the following result:

And that's it.

So where is the trick? Is that everything?

Ahh… for those accustomed to programming ASP.NET, I guess it comes as a bit of a surprise that it would be so easy to get XML-formatted records out to the client layer.

Surely there has to be a catch somewhere? … A WCF service with implemented data contracts? An Object-Relational Mapping framework operating behind the scenes? Or at least a strongly typed Collection using Generics?

Nope, that's it. You can do this the hard way, but that's not why you are reading this article. So now you can take off your C# hat and put on the JavaScript one, because the rest of the logic goes on the client layer so that users can get the most out of their UI experience.

SOURCE CODE

Here is the source code I used in this example. There are two files: Product.aspx and Product.aspx.cs (code-behind), so I've zipped them up into this archive:

Product.aspx.zip

You are free to copy the code here, but since you are not paying me for it I accept no liability if your site goes topsy-turvy. One more free tip: if you are going to put this into production you may want to use a Generic Handler (.ashx file) instead; it's more lightweight and you don't need all the functionality in the Page class.
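For reference, here is a minimal sketch of the Generic Handler version (same Northwind assumptions as above):

using System.Data;
using System.Data.SqlClient;
using System.Web;

// Product.ashx: <%@ WebHandler Language="C#" Class="ProductHandler" %>
// No Page lifecycle - just a request in and XML out.
public class ProductHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var table = new DataTable("Products");
        using (var connection = new SqlConnection("Server=.;Database=Northwind;Integrated Security=true"))
        using (var command = new SqlCommand("SELECT * FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
                table.Load(reader, LoadOption.Upsert);
        }
        context.Response.ContentType = "text/xml";
        table.WriteXml(context.Response.Output);
    }
}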

Please note: I’ve left you the section “WRITING TO THE DATABASE” for a separate article.

Nando's Intranet

A central point of contact for all Nando's stores. From the central login page, each user has a set of applications which they can launch by simply clicking on an icon.

Designing the Front Page Layout

  • 3-column template using highly colourful branding in keeping with Nando's image.
  • Content is based on user profile with 3 types of users: Store Login, Area Manager Login, IT Login
  • Icon-Based navigation used for launching applications.
  • “Pass-through” authentication: user login parameters are forwarded to each application using HTTP POST (see the sketch after this list)
  • Login information and Noticeboard.
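As an illustration of that pass-through mechanism (the field names and target URL below are made up, not the production values), the intranet can render a hidden, pre-filled form that immediately POSTs the login parameters to the target application:

using System.Web;

static class AppLauncher
{
    // Sketch: build an auto-submitting form that forwards login
    // parameters to the launched application via HTTP POST.
    public static string RenderLaunchForm(string user, string token)
    {
        return
            "<form id='launch' method='post' action='https://app.example.com/login'>" +
            "<input type='hidden' name='user' value='" + HttpUtility.HtmlAttributeEncode(user) + "'/>" +
            "<input type='hidden' name='token' value='" + HttpUtility.HtmlAttributeEncode(token) + "'/>" +
            "</form>" +
            "<script>document.getElementById('launch').submit();</script>";
    }
}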

Nando's Intranet Point of Entry

Designing Application Layout for a Store

  • Sample screen for a Nando's Store Login
  • Menu contains customised options
  • Key item is the “End Of Day” button, which stands separate from the rest

Nando's Store Login

Putting it all together with a Content Management System

  • Site integrated into the ActiveWeb CMS written in ASP.NET
  • ASP.NET C# and XML/XSLT-based templates to provide flexibility (see the sketch after this list)
  • The CMS can add new applications for users to launch as needed.
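As a rough illustration of the XML/XSLT templating approach (the file paths are assumptions for the example), rendering a page means running a content XML document through an XSLT stylesheet:

using System.IO;
using System.Xml.Xsl;

class TemplateRenderer
{
    // Sketch: render CMS content by applying an XSLT template to an XML document.
    static string Render(string contentXmlPath, string templateXslPath)
    {
        var transform = new XslCompiledTransform();
        transform.Load(templateXslPath); // e.g. "Templates/StorePage.xsl" (assumed)

        using (var output = new StringWriter())
        {
            // Apply the stylesheet to the content document.
            transform.Transform(contentXmlPath, null, output);
            return output.ToString(); // resulting HTML for the page
        }
    }
}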