Wednesday 11 March 2020

DaggerTech.Data finally converted

It has been a long road converting my C# ORM to Go, but it is finally done (although I do need to clean up the code).
Unfortunately, my day job got in the way of working on this, but I still gotta eat.
Anyway, the ORM (called GoDagger), like DaggerTech.Data, works by creating tables when it needs them, so no initial setup is required. This means an application using it can start up faster, but there is a small performance hit in the early stages of execution while the tables are being created.
The programmer is completely shielded from the SQL of the underlying database and can focus directly on the application logic.
Each table is given 4 fields:
        ID         VARCHAR(36) NOT NULL PRIMARY KEY
        CreateDate BIGINT NOT NULL
        LastUpdate BIGINT NOT NULL
        DeleteDate BIGINT NULL
As you can see, I am using numeric values for dates. This allows me to port GoDagger to other databases without needing any database-specific conversion of date values.
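Just to illustrate the idea (this is only a sketch, not necessarily how GoDagger stores its dates internally), converting between time.Time and a BIGINT column could look something like this, assuming the stored value is Unix milliseconds:

package main

import (
        "fmt"
        "time"
)

// toDBTime converts a time.Time into a number suitable for a BIGINT column.
// Unix milliseconds are assumed here purely for illustration.
func toDBTime(t time.Time) int64 {
        return t.UnixNano() / int64(time.Millisecond)
}

// fromDBTime converts a stored BIGINT value back into a time.Time.
func fromDBTime(ms int64) time.Time {
        return time.Unix(0, ms*int64(time.Millisecond))
}

func main() {
        now := time.Now()
        stored := toDBTime(now)
        fmt.Println(stored, fromDBTime(stored))
}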
The Model struct in Go to match to this is as follows:
type Model struct {
        ID         *string
        CreateDate time.Time
        LastUpdate time.Time
        DeleteDate *time.Time
}
As you can see in Model, both ID and DeleteDate are pointers. This allows me to have nil values in those fields: a nil ID indicates that the record has never been saved, and DeleteDate is nil if the record has not been deleted. The reason for the DeleteDate is that in some systems it is better to disable a record rather than delete it. By default, GoDagger will not retrieve any record that has a DeleteDate, but this can be overridden.
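To make the nil-pointer idea concrete, here is a tiny sketch of the kind of check involved. These two helpers are hypothetical, written against the Model struct above, and are not part of the GoDagger API:

// IsNew reports whether the record has never been saved (no ID assigned yet).
func (m Model) IsNew() bool {
        return m.ID == nil
}

// IsDeleted reports whether the record has been soft-deleted.
func (m Model) IsDeleted() bool {
        return m.DeleteDate != nil
}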
To create a new model, we use composition of the Model struct, plus the use of struct tags:
type TestModel struct {
        godagger.Model
        Age      int     `dagger:""`
        Name     string  `dagger:"size:20"`
        Nickname *string `dagger:"size:20"`
}
For those familiar with GORM, you will see a resemblance. This is purely coincidental, as I had not looked at GORM until a few days ago. The big difference, though, is that GORM includes fields by default unless they are specifically ignored, whereas GoDagger only includes fields that have a 'dagger' struct tag.
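To give a rough idea of how that opt-in behaviour can be implemented (a sketch only, not the actual GoDagger code, and the Example type is purely for illustration), the reflection loop just needs to check each field's tag for the 'dagger' key and skip anything without it:

package main

import (
        "fmt"
        "reflect"
)

type Example struct {
        Age      int    `dagger:""`
        Name     string `dagger:"size:20"`
        Internal string // no tag, so an opt-in ORM would skip this field
}

func main() {
        t := reflect.TypeOf(Example{})
        for i := 0; i < t.NumField(); i++ {
                f := t.Field(i)
                // Lookup reports whether the key is present, even when its value is empty.
                if opts, ok := f.Tag.Lookup("dagger"); ok {
                        fmt.Println(f.Name, f.Type, opts)
                }
        }
}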
This is a very lightweight ORM: it does not create links between models for you; that is up to the developer. DaggerTech.Data works the same way, as a lot of the data is passed as JSON to web applications, and if I allowed automatic linking and loading there would be the potential for massive amounts of data to be transferred, as well as infinite loops.
To save a model to the database, it is simply a case of calling the Save function in GoDagger. This will look at the ID field and determine if the record needs to be created or updated.
tm1 := &TestModel{
        Name: "Test1",
        Age: 42,
}
godagger.Save(tm1)
Note that the Save function requires a pointer, as do all the functions in GoDagger.
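As a follow-on example (assuming, as the pointer requirement suggests, that Save writes the generated ID back into the struct after the initial insert), updating the same record is just a matter of changing a field and saving again:

// tm1 now has a non-nil ID after the first Save, so this call
// results in an UPDATE rather than an INSERT.
tm1.Age = 43
godagger.Save(tm1)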
Searching, deletion, and some other helper functions are also available, which I will detail in a future post.

Monday 27 January 2020

OOP & Reflection in Go??

Converting DaggerTech.Data to Go is proving to be more of a challenge than I realised.

After over twenty years of object oriented programming, I am having to unlearn a great deal to work the Go way. Whilst Go does have a form of object orientation, to do what the ORM library needs to do, I have to re-work a lot of the code in a non-OOP fashion.
The main issue I am facing is that I am so used to inheritance: the ability to call a method of a base class, with that base method being able to view the properties of the derived class (when using reflection). Unfortunately, this is not possible with Go's approach of embedding structs. The answer is to use interfaces and write functions that do not have a receiver, but instead take the interface as a parameter. Not sure if that makes sense, but I will illustrate:

The following code is how I would have performed the reflection in a normal OOP manner; in fact, this is exactly what DaggerTech.Data does in its C# version:

package main

import (
    "fmt"
    "reflect"
)

type Base struct {
    Name string
}

type Derived struct {
    Base
    Age int
}

func (b Base) DoReflection() {
    t := reflect.TypeOf(b)
    for i := 0; i < t.NumField(); i++ {
        fmt.Println(t.Name(), t.Field(i).Name)
    }
}

func main() {
    b := Base{}
    b.DoReflection() // OK. Displays the field Name
    d := Derived{}
    d.DoReflection() // Not OK. Only displays the field Name
}

Result:

Base Name
Base Name

The problem with this is that the call to DoReflection only deals with the Base struct, even though the method is promoted when the Derived struct embeds it. What is actually happening is that the compiler sees d.DoReflection() and effectively rewrites it as d.Base.DoReflection(), therefore passing the embedded Base as the receiver.
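To see that explicitly, using the types from the example above, the two calls below are equivalent and produce identical output:

d := Derived{}
d.DoReflection()      // the compiler promotes the method from the embedded Base...
d.Base.DoReflection() // ...so it is the same as calling it on d.Base directly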

To get around this, we need to make the DoReflection method a function receiving an interface:

package main

import (
    "fmt"
    "reflect"
)

type Base struct {
    Name string
}

type Derived struct {
    Base
    Age int
}

func DoReflection(b interface{}) {
    t := reflect.TypeOf(b)
    for i := 0; i < t.NumField(); i++ {
        fmt.Println(t.Name(), t.Field(i).Name)
    }
}

func main() {
    b := Base{}
    DoReflection(b) // OK. Displays the field Name
    d := Derived{}
    DoReflection(d) // Not OK. Displays Base and Age
}

Result:

Base Name
Derived Base
Derived Age

But now we have a different issue. When the Derived struct is inspected, it sees the Base struct and just reports that, not the ‘inherited’ fields. For this we need to employ a little recursion:

package main

import (
    "fmt"
    "reflect"
)

type Base struct {
    Name string
}

type Derived struct {
    Base
    Age int
}

func DoReflection(b interface{}) {
    t := reflect.TypeOf(b)
    v := reflect.ValueOf(b)
    for i := 0; i < t.NumField(); i++ {
        if v.Field(i).Kind() == reflect.Struct {
            DoReflection(v.Field(i).Interface())
        } else {
            fmt.Println(t.Name(), t.Field(i).Name)
        }
    }
}

func main() {
    b := Base{}
    DoReflection(b) // OK. Displays the field Name
    fmt.Println("") // Added an empty line for readability in the results
    d := Derived{}
    DoReflection(d) // OK. Displays Name and Age
}

Result:

Base Name

Base Name
Derived Age

As you can see in the results, we now have both of the fields from the Derived struct. As a bonus, we can even see in which struct the field was declared.

In the database library, I also need to get the field type and any tags that are applied to the fields, but that is trivial compared to re-working my brain for this. I also need to add special handling for the time.Time struct (the equivalent of C#'s DateTime), as it would otherwise be recursed into like any other struct.
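As a rough sketch of where that is heading (not the final GoDagger code), the recursive walk can also report each field's type and tag, and treat time.Time as a single value rather than a struct to descend into:

package main

import (
    "fmt"
    "reflect"
    "time"
)

type Base struct {
    Name string `dagger:"size:20"`
}

type Derived struct {
    Base
    Age  int       `dagger:""`
    When time.Time `dagger:""`
}

func DoReflection(b interface{}) {
    t := reflect.TypeOf(b)
    v := reflect.ValueOf(b)
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        // time.Time is itself a struct, but it should be treated as a value,
        // not recursed into field by field.
        if v.Field(i).Kind() == reflect.Struct && f.Type != reflect.TypeOf(time.Time{}) {
            DoReflection(v.Field(i).Interface())
        } else {
            fmt.Println(t.Name(), f.Name, f.Type, f.Tag.Get("dagger"))
        }
    }
}

func main() {
    DoReflection(Derived{})
}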

Friday 3 January 2020

New Project

Time to start a new project.

This one, without giving too much away, is a large-scale project aimed at medium to 'semi-large' enterprises.

It will be web based, allowing for a large number of simultaneous users, as well as being driven by an SQL database.

Having a number of certifications and qualifications in various languages, I have the opportunity to choose my development stream.

Firstly, I considered using Xojo, but the problem there is that my licence for Xojo has expired and, as I am now using macOS Catalina, I can no longer run 32-bit applications. I also don't really want to pay for the licence when there are free options available, so it is time for me to move on.

Next, we have PHP. I have a lot of experience with PHP and have developed a large number of web sites and applications using it. However, being a scripted language, it has inherent problems, primarily speed and the potential for vulnerabilities. On the plus side, PHP has a very large community and a vast number of libraries to call upon, but it does need a web server and a runtime to be deployed before the application itself can be deployed.

My day job requires me to develop software written in dotnet, or more specifically, C#. This would fix the speed issue (to a point), and, if I use dotnet core, I wouldn't need to pre-deploy a web server. Through my day job, I do have a certification from Microsoft for developing applications in C#, but there are still some issues, primarily that the speed is only partially addressed and that the dotnet runtime, which is quite large, needs to be deployed on the target system.

Each of the options mentioned above is available on all three main platforms: Windows, Linux and macOS. The next option is not readily available on Windows, which is why I have rejected it as well. Swift is known as the primary language for creating iOS and macOS software; however, as it is now open source, it is just as easy to use it on Linux and, with some jiggery-pokery, Windows (but this is not straightforward). There are three main frameworks for creating web applications with Swift, but they feel like a bolt-on rather than a native solution.

This brings me to the final, and selected, solution: Go. I recently completed a course in development with Go, which doesn't make me anything like an expert, but I feel I have enough understanding of the core syntax and key concepts to be able to utilise it for a production project. Go produces native binaries, so no runtime is required. No web server is required either as, like dotnet core, the product is an application that acts as its own server. Finally, Go, being a newer language, is designed for multi-core systems, which means that concurrent programming is the norm (Rust is similar in that respect, but I prefer the syntax of Go). On the down side, it is not a strictly object-oriented language (although some would argue that the syntax allows for a form of OOP). This means that I need to learn a new way to design and develop the software.

The first task at hand is to convert my database ORM package. About a year ago, I wrote a library for C# called DaggerTech.Data, which divorces nearly all of the database functionality from your application code, allowing you to focus on the application. The library was designed for use with Microsoft SQL Server.

I have also ported DaggerTech.Data to PHP, although this one is designed for MySQL and MariaDB.

For this project, I will be using Postgres as the database. I want to keep the costs to the customers down, and using a truly open source database ticks all the boxes. It does mean that I need to rewrite DaggerTech.Data in Go (currently codenamed GoDagger), which is a bit of a challenge as the original is completely object oriented and uses reflection to provide its functionality, but the documentation for Go is very comprehensive, so it shouldn't be too bad.

Anyway, just an update to let you know what I’m up to these days. I will keep you updated with developments.