
desalasworks presents:

a portfolio of work by steven de salas

Newcross Healthcare

Overview

Steven spent 18 months working as Lead Engineer for the HFA team, an “Uber for healthcare workers” platform processing payments for thousands of health workers in both home and residential care settings. He focused on performance and stability fixes, CI/CD pipelines, upskilling new engineers, removing blockers, backend deployments, and troubleshooting and extending the system.

Technology Stack

Express, FeathersJS, Kafka, Sequelize ORM, Promises, Async/Await, ES2021, RESTful APIs, Jest, React, Babel, Webpack, Redis, RabbitMQ, SQL Server, DB Replication, Bitbucket, GitHub Actions, Monorepos, Docker, Kubernetes, Test Automation, GitOps, DevOps, Jenkins, Bash, AWS, EC2, S3, SQS, Serverless, Lambdas, CloudFront, CloudFormation, SNS, IAM, Load Balancing, SonarQube, SSL/TLS Certificates, New Relic Analytics, OAuth, Single Sign-On, Performance Optimization, Production Debugging, Code Reviews and Mentoring.

Key Deliverables

Steven joined Newcross at a critical time of change. The initial intention was for him to continue the work of previous engineers in a small team maintaining a highly impactful healthcare platform that helped to improve people’s lives and was very much in demand. However, it quickly became evident that the tech debt accumulated over the years made the platform hard to maintain, let alone build the business on.

Steven went from being a Senior Engineer in a team of two to interviewing candidates for, and leading, a team of ten engineers and testers, all the while mentoring and upskilling new hires, removing blockers, automating deployments, tracking down issues in production, improving the platform to keep it performant and reliable, and taking responsibility for cross-functional collaboration.

Feedback

Here is some feedback received on LinkedIn.

Working with Steven was great. He joined Newcross at a critical time and quickly became an essential part of the backend team. He then took charge of our platform backend and made significant contributions towards performance improvements, build automation, bug fixes and setting up a strong engineering practice within the team. He also helped in laying the foundations of migrating Monolith to Microservices. Very happy to recommend him.

Esteban is one of the most inspiring engineers I’ve ever worked with. His “Sherlock Holmes”-esque team presentations, where he tracked down the most elusive bugs, were legendary at Newcross and had people actually applauding! Incredibly smart and a genuine pleasure to work with, his superpower is inspiring and lifting the whole team around him. I can’t recommend Mr De Salas highly enough.

 

Markit Commission Manager

Commission Manager is a trade reconciliation tool enabling brokers and funds in the world’s largest financial institutions to settle their commission differences.

The Project

The project involved building a standardized platform to aggregate and manage trade information, commission balances, vendor invoices and payments.

Functionality

Markit Commission Manager enables users to reconcile their trading commissions with multiple counterparties and then instruct those counterparties to pay for research and brokerage services – all from a single platform. This efficient workflow eases the administrative burden of managing multiple commission arrangements.

The functionality is centered around trade reconciliation, raising invoices and managing balance differences.

  • Detailed reporting of balance, trade and invoice information
  • Efficient tools to manage and highlight trade breaks
  • Administrative configurations for new users, tolerance levels, and arrangements
  • Compliance tracking of interactions within the Commission Manager system

Business Partner Integration

In addition, it was necessary to provide systems integration with all broker-dealers that were involved in the project.

Comments on the Commission Manager platform

Sofia Rossato – Head of Markit Research Manager at Markit

Markit Commission Manager is the latest addition to our Markit Research Manager range of services. Our objective is to enable investment firms to manage their entire research workflow – including sourcing research, tracking corporate access, voting on brokers and managing research commissions – from one single platform to bring greater efficiency and internal visibility to the whole process. Markit Commission Manager is the final piece in the jigsaw. We have partnered closely with BofA Merrill Lynch, Barclays Capital, Citi, Credit Suisse, Deutsche Bank, Goldman Sachs, J.P. Morgan and Morgan Stanley to ensure the platform meets the needs of the industry.

Frank Volino – Head of Global Commission Management Services at Citi

This platform is good news for buy-side and sell-side alike. It will allow the buy-side to use a standardised set of tools to manage their commission credits at multiple broker-dealers. We are very pleased to be part of this important industry initiative. This platform is in a strong position to become the industry standard.

Commission Manager in the News


07 Oct 2011
Markit launches commission management platform
http://www.bankingtech.com/bankingtech/article.do?articleid=20000213861


04 Oct 2011
Broker-Dealers Fund Markit’s New Commission Management Platform
http://www.securitiestechnologymonitor.com/news/markit-brokers-commission-29174-1.html

SQL XML Performance in High-Volume Databases

XML may be a drag, but you can use it within SQL to turn your database server into a high-performance love machine.

Now I know many of you will be wondering: XML, performance and high-volume in the same sentence? Surely you must have gone nuts!

I can promise you I haven’t gone nuts. While I agree that XML in the back-end is bulky, unruly, and often a cause of performance degradation rather than the good news you desperately want to hear, there is at least one place where it can make a difference for the better.

Stored Procedures and their Limitations

You see, back when Stored Procedures for relational databases were first created, they quickly became the greatest thing since sliced bread (and boy, were they an improvement over writing SQL code directly into your application). However, there was one little problem with Stored Procedures that remained unsolved for a long time. SQL deals in RecordSets (i.e. tables); that is the essence of the language. Yet the input possibilities for Stored Procedures were always pretty limited: simple data types such as strings, numbers and booleans. Until recently there was no RecordSet parameter type, so you couldn’t easily pass a list of things as input into a Stored Procedure.

You see, most applications deal with lots of CRUD operations (Create, Read, Update, Delete), and of those, Stored Procedures can only handle many records at a time on the output side (Read). The CUD part (Create, Update and Delete) had to be done one record at a time when using simple data inputs, and for most applications it remains this way even today.

Sometimes developers come up with a workaround to enter a list of parameters

This has long been a bit of a problem, and many developers over the years have come up with workarounds (like passing a long list of pipe-separated values), but the solutions have ranged from the not-so-great to the let’s-hold-our-breath-and-hope-it-doesn’t-fail-spectacularly.
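To illustrate the kind of workaround being described, here is a minimal sketch using the STRING_SPLIT function available in newer versions of SQL Server; the table, column and procedure names are made up. Older versions needed hand-rolled string splitters, which is where much of the fragility came from.

    -- Hypothetical sketch of the delimited-list workaround.
    -- The caller packs IDs into a single string, e.g. '1042|1043|1044'.
    CREATE PROCEDURE dbo.usp_DeactivateProducts
        @ProductIds VARCHAR(MAX)   -- pipe-separated list of product IDs
    AS
    BEGIN
        UPDATE p
        SET p.IsActive = 0
        FROM dbo.Products AS p
        JOIN STRING_SPLIT(@ProductIds, '|') AS ids
            ON p.ProductID = CAST(ids.value AS INT);
    END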

High Volume Inserts and Updates

Entering records one at a time is fine and dandy for most applications, however those requiring high-volume inserts and updates are severely constrained by this fact. Why, you ask? Well, imagine you have an input data feed that needs to insert 10,000 records into a table, then return a message to say how things went. There are two ways to do this:

a) You split the records and perform 10,000 separate INSERT operations, or

b) You keep the records together and perform a single INSERT operation with 10,000 records.

Which one do you think will perform faster?

It’s a no-brainer really: calling a stored procedure once and performing a single INSERT operation will be significantly faster (often over 1,000 times faster) than doing all the individual inserts one at a time, especially once you factor in the network latency between your application server and database server when you repeat the procedure call thousands of times.

Here I made a pretty picture so you get the idea:

I hope you made some coffee, this is going to take a while.

So if you plan to insert one record at a time, the other side will probably have to wait a few minutes or hours to get a response back from you. However if you perform the load as a single INSERT, you can probably get a message back to them within a few seconds.

Now your standard run-of-the-mill developer will say: “Hey, we can thread this out into 100 different concurrent calls to the database!” But the database server can only handle so many concurrent INSERT operations at a given time, not to mention that it might become unresponsive under the sudden overload and that you are using up a lot of unnecessary bandwidth in the form of additional calls coming both ways over the network. Ultimately there is a better solution than the hammer-it-harder approach.

XML Saves the Day

So how does XML feature into this discussion?

Well, you see, SQL Server (and Oracle) have a handy XML data type that can ALSO BE USED AS INPUT into a Stored Procedure. This technique has been available as far back as SQL Server 2000, but many developers are not aware of it.

This way you can get a response back in a few seconds.

It’s quite easy to strip out records from XML input. You can even perform XML Schema validation inside SQL Server, but I’m not going to get into that today.
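To give a flavour of what this looks like, here is a minimal sketch of a stored procedure that takes a whole batch of records as a single XML parameter and inserts them in one set-based operation. The table, element and procedure names are made up for illustration.

    -- Hypothetical sketch: accept a batch of records as XML, insert them in one go.
    CREATE PROCEDURE dbo.usp_BulkInsertOrders
        @OrderData XML
    AS
    BEGIN
        INSERT INTO dbo.Orders (CustomerId, ProductId, Quantity)
        SELECT
            o.value('(CustomerId/text())[1]', 'INT'),
            o.value('(ProductId/text())[1]',  'INT'),
            o.value('(Quantity/text())[1]',   'INT')
        FROM @OrderData.nodes('/Orders/Order') AS batch(o);
    END

The application server then makes a single call along these lines, however many records are in the batch:

    EXEC dbo.usp_BulkInsertOrders @OrderData = N'<Orders>
        <Order><CustomerId>1</CustomerId><ProductId>7</ProductId><Quantity>3</Quantity></Order>
        <Order><CustomerId>2</CustomerId><ProductId>9</ProductId><Quantity>1</Quantity></Order>
    </Orders>';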

(I’ll follow up on this a bit later. Just gotta get some stuff done first)

Faster Web Applications with Indexed Views

A short introduction to ‘Indexed Views’, a really handy performance-improvement tool available in SQL Server.

I’ve generally tried to stay clear of using traditional (non-indexed) SQL Views as they severely hinder performance when building applications that query a large set of data.

Traditional SQL Views and the Problems they Cause

Here is what happens when you create a View on a large database. Typically you’ll want data from several tables aggregated into just the results you are looking for, and while that is what you get, the view is only a virtual query that takes up no space, so every query you make against the View is passed on to the underlying tables. Worst of all, if you use the View in one of your stored procedures, the view may need to be fully resolved against all the underlying records even if you add a WHERE clause outside it to limit the data to a subset; the same does not happen if you get rid of the View and run the equivalent SELECT query with the WHERE clause built in!

SQL Views are slow because a query affects every underlying table


You can imagine that if you are trying to build a ‘dashboard’ on a web application that shows some totals and gets hit every 2-3 seconds, millions of rows will be traversed over and over again. This can be somewhat mitigated by caching the output of stored procedures, but it’s still murder on the database.

Improving Performance with ‘Indexed Views’

Now here comes the exciting bit:

  • What if you could automatically store just the records you need to create your dashboard?

That is exactly what happens when you create an index on one of your views. The data becomes materialized to disk and the results you are after are available (i.e. ‘cached’) without having to query the underlying tables every time you need some data.

Indexed Views are faster because only the view itself gets queried.
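As a minimal sketch (with made-up table and column names), materialising a dashboard total looks something like this:

    -- Hypothetical sketch: pre-aggregate order totals for a dashboard.
    -- SCHEMABINDING and two-part table names are required for an indexed view.
    CREATE VIEW dbo.vw_OrderTotalsByCustomer
    WITH SCHEMABINDING
    AS
        SELECT
            o.CustomerId,
            COUNT_BIG(*)                 AS OrderCount,  -- COUNT_BIG(*) is mandatory with GROUP BY
            SUM(ISNULL(o.OrderTotal, 0)) AS TotalSpent
        FROM dbo.Orders AS o
        GROUP BY o.CustomerId;
    GO

    -- The unique clustered index is what actually materialises the view to disk.
    CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderTotalsByCustomer
        ON dbo.vw_OrderTotalsByCustomer (CustomerId);

Dashboard queries can then read dbo.vw_OrderTotalsByCustomer directly (with the NOEXPAND hint on editions below Enterprise) instead of re-aggregating millions of rows on every hit.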


The Downside of Using Indexed Views

Be aware that there are a couple of drawbacks to using this type of construct.

  1. First, your underlying tables become ‘schema-bound’, which means that you can no longer get rid of them or change their structure (add an extra column, for example) without dropping the view first.
  2. Second, any insert or update into the underlying tables will be slowed down, because each one causes a refresh of the indexed view. This means transactions involving INSERT, DELETE or UPDATE on these tables will ideally have to be batched (i.e. try to avoid inserting/updating one row at a time; insert/update many rows at a time instead).

However, in my opinion, the drawbacks may be well worth it, as most applications involve many database reads and few database writes.

More about Indexed Views

Support for Indexed Views in other database systems.

Oracle 8i and upwards has Materialized Views, which are a very similar feature. MySQL, however, is one of those database systems that does not support Materialized (or Indexed) Views.

If you want to have similar functionality in MySQL and you use Stored Procedures for inputting data into your database, you can enhance the Stored Procedures that update/insert data by running an extra calculation at the end of the procedure that updates a summary table which acts as your view. This is essentially doing the same thing as an Indexed View but keeping it updated manually.
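As a rough sketch of that manual approach (MySQL syntax, with made-up table and column names, and assuming order_totals has customer_id as its primary key), the insert procedure keeps a summary table in step with every write:

    -- Hypothetical MySQL sketch: maintain a summary table by hand,
    -- emulating what an indexed/materialised view would do automatically.
    DELIMITER //
    CREATE PROCEDURE insert_order(IN p_customer_id INT, IN p_amount DECIMAL(10,2))
    BEGIN
        INSERT INTO orders (customer_id, amount)
        VALUES (p_customer_id, p_amount);

        -- Refresh the pre-aggregated totals as part of the same write.
        INSERT INTO order_totals (customer_id, total_amount, order_count)
        VALUES (p_customer_id, p_amount, 1)
        ON DUPLICATE KEY UPDATE
            total_amount = total_amount + p_amount,
            order_count  = order_count + 1;
    END //
    DELIMITER ;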

Hope the explanation was useful.

Value Trader

Value Trader is a tool that calculates stock prices based on value fundamentals from Balance Sheet and Profit and Loss statements. Feel free to use it to determine the financial health and estimated value of your stocks.

Live Address

http://www.valuetrader.net

The Landing Page

The landing page provides concise information on a few selected companies based on a “Watchlist”. These are chosen by the investor so that they feature on every visit.

Company Details Page

When drilling down into an individual company, the system provides information by looking at financial data over the past few years to determine the general health of the company, as well as providing recommendations based on price (by comparing it to the book value and earnings of the company).

Designed for iPhone

The design has been further enhanced to look good on mobile devices, using a mix of small and large type so the important information is clearly displayed even on a small screen.

IP Based Security

The website makes use of IP addresses to authenticate users. This has the following advantages:

  • Users only need to authenticate once (per location).
  • User locations are tracked by a collection of IP addresses associated with an email address.
  • There are measures in place to detect SQL Injection, Denial of Service and Dictionary Attacks. If a single location tries to hack the site, all associated IP addresses are automatically denied access.

AJAX database access in C# – The simple way

Today, I’m going to throw the Microsoft textbook out the window and show you a really easy way to get your database records into a JavaScript application. Minimal hassle – maximum bang for your buck.

First, I’m assuming you have chosen a JavaScript framework such as Ext JS, Dojo, Yahoo UI, jQuery or any other fine library for your front-end widgets. If so, then congratulations: this article is just for you.

The trick is to leave the middle-tier C# layer as thin as possible, implementing only the things your client layer can’t do reliably: Security and Data Access.

In this example I am only showing how to write minimal code for Data Access; Security is too long a topic for a single article.

READING FROM THE DATABASE

Say you have a database with products in it. For this example I am using the Northwind database:

 

Here is some C# code I wrote earlier to open up the database and read the first record. If you want some examples of valid connection strings you can look here and here.

Here is the output when you run this code in your browser:

The magic here happens in lines 32 and 38.

Line 32 uses DataTable.Load() to get the database contents into a .NET data table as follows:

32     table.Load(reader, LoadOption.Upsert);

Line 38 uses the DataTable.WriteXml() method to write the contents of the table in XML format as a HTTP response.

38     table.WriteXml(writer);

Now, in order to go one step further, your AJAX application needs to read INDIVIDUAL records (that is, one at a time) and show them to the user.

Here are some modifications I made to the earlier code for this purpose:

And if you run this code and append “?ID=8” to the end of your request (which you can easily do within JavaScript), you get the following result:

And that’s it.

So where is the trick? Is that everything?

Ahh.. For those accustomed to programming ASP.NET I guess it comes as a bit of a surprise that it would be so easy to get XML formatted records out to the client layer.

Surely there has to be a catch somewhere? … A WCF service with implemented data contracts? An Object-Relational Mapping framework operating behind the scenes? Or at least a strongly typed Collection using Generics?

Nope, that’s it. You can do this the hard way, but that’s not why you are reading this article. So now you can take off your C# hat and put on the JavaScript one, because the rest of the logic goes on the client layer so that users can get the most out of their UI experience.

SOURCE CODE

Here is the source code I used in this example. There are two files, Product.aspx and Product.aspx.cs (the code-behind), so I’ve zipped them up into this archive:

Product.aspx.zip

You are free to copy the code here, but since you are not paying me for it I accept no liability if your site goes topsy-turvy. One more free tip: if you are going to put this into production you may want to use a Generic Handler (.ashx file) instead; it’s more lightweight and you don’t need all the functionality in the Page class.

Please note: I’ve left you the section “WRITING TO THE DATABASE” for a separate article.

ESPA Online Checkout

E-commerce website featuring relaxation and skincare products by a UK spa design and management company. The project involved a client checkout and ordering process embedded in an existing Content Management solution.

 

Development of The Checkout Process

  • Enabling taxes and currency assignment based on IP address geo-location
  • Assigning and applying promotional codes
  • Ajax-based interaction and shopping cart review
  • Payment and warehouse integration
  • Order confirmation email
  • Integration into existing Sitefinity CMS templating
  • Several other related features

ESPA Checkout

The Checkout Process – Part 2

  • VAT calculations on last minute changes
  • User Experience – Ajax accordion UI features
  • Credit card detail verification
  • Payment integration
  • Warehouse order forwarding

ESPA Payment

Developing the Shopping Experience

  • Integrating the bespoke ASP.NET shopping cart into the existing website
  • Developing enhanced user-experience components such as a wishlist for favourite products
  • Reviewing and testing the complete process

ESPA Shopping Experience

Designing Email Feedback

  • Creating suitable email layouts based on design specifications
  • Testing on email clients such as Gmail, MSN, Outlook, Yahoo, etc.
  • Integrating with existing order system after warehouse confirmation

ESPA Order Email

Tracking and Logging User Access

  • Designing the object and database model for logging and user/product tracking
  • Integrating the model into existing website infrastructure
  • Testing high usage volumes
  • Producing email reports looking at statistics on the number of orders, products purchased, etc.

Imperial College Service Desk

A web front end for existing Imperial College Service Desk software so that users can access the problem ticketing system and check status directly.

Designing the Landing Page

  • The contents of the landing page are largely dictated by the existing application.
  • The Marval API is used to link up with the core CGI and database.
  • Users are able to create tickets or view existing requests.

Imperial College Service Desk

Template for Adding Incidents to the Database

  • Query information is largely based on existing database fields
  • Interaction via CGI into Marval Database
  • Once the form is completed, a new incident is flagged and forwarded to the Service Desk or the appropriate IT person.

Imperial College Service Desk - New Problem

Template for Viewing Current Incidents in the Database

  • The screen allows current users to read from the existing Service Desk database.
  • It’s possible to update current problems so as to notify the relevant staff in charge of each problem.

Imperial College Service Desk - View Problem


Imperial College PC Shop

Online PC shop for Imperial College staff and departmental needs. Featuring a bespoke administration interface capable of expanding the product range.

Developing the Landing Page

  • Purchasing Process based on a 5 step “wizard” configuration to improve user experience
  • Project developed in Classic ASP, JavaScript and SQL Server

Imperial College PC Shop - Main

Completing the Purchasing Process (Confirm Order)

  • Purchase process completed
  • Order Confirmation Email sent to user
  • Details of the order are stored and forwarded to procurement department

IC PC Shop Confirm Order

Designing the Administration Interface

  • Add New Products, Edit and Delete Existing Products.
  • Integrated into existing Imperial College web templates.
  • User Interface simple and easy to use.

Imperial College PC Shop - Admin

Nando’s End of Day Uploads

A daily upload process carried out by each of Nando’s stores that copies daily Point-of-Sale information into a central database repository for analysis.

Start of Process

  • Nightly process run by store manager
  • Launched as a popup with pass-through authentication (POST variables)
  • Pushes data from local Point-of-Sale system into central database


Nandos End of Day Completed

  • Wages Information, Cash Registry & Inventory File uploaded from local till system into central database
  • Data is inserted using BULK INSERT (Transact-SQL), as sketched below
  • Data parsing is performed on SQL Server using stored procedures for faster execution.
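For illustration, here is a rough sketch of what such a nightly load can look like. The file path, staging table and procedure names are hypothetical, not the actual implementation.

    -- Hypothetical sketch only: paths, table and procedure names are illustrative.
    -- Load the raw end-of-day file into a staging table in one set-based operation.
    BULK INSERT dbo.EndOfDayStaging
    FROM '\\store-server\exports\end_of_day.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 2,      -- skip the header row
        TABLOCK                   -- table lock keeps the bulk load fast
    );

    -- Parsing and validation then happen server-side in a stored procedure.
    EXEC dbo.usp_ProcessEndOfDayUpload @StoreId = 42;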

Nandos - End of Day Completed