The Power of Bower

Recently I started doing a lot of frontend web development. At heart I’m a .NET developer, and NuGet has been a tool in my toolbox for a long time. Lately I’ve been implementing a lot of frontend work in a mix of ASP.NET MVC and AngularJS projects. One thing I noticed is that every time I come across an external frontend library, there is always a reference to bower.io. I never gave it much thought, but a few weeks ago I wanted to use the angular-ui calendar component, an Angular port of FullCalendar.

I went to NuGet, my primary source at the time, to find it, but no luck. Then a jump to GitHub, where I found the source and made a clone… but this is no fun and not easy to automate in a team. I just wanted a simple way to use it, without having to download or clone it manually. I went back to the angular-ui page and followed the reference to Bower.io, a package manager for the web.

After a very easy installation, just by following the guide on the homepage, I was able to run a bower search

bower search angular-ui-calendar

and a bower install

bower install angular-ui-calendar --save

There it was, the library I needed. After a few “how do I…” moments, I had suddenly found a tool I would have loved to know about a long time ago. The latter command actually downloads the library and all of its dependencies, outputs them by default into a folder named bower_components, and places a reference in a bower.json file. Just like NuGet restore, Bower can restore the components referenced in the bower.json file. This means that on another developer machine, you just execute bower install after cloning the project, and Bower will ensure you get all the dependencies you need to run it.
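The bower.json created this way is just a small manifest. A minimal, hypothetical example (the package name is real, the project name and version range are only illustrative) could look like this:

{
  "name": "my-frontend-app",
  "dependencies": {
    "angular-ui-calendar": "~0.9.0"
  }
}

Running bower install in a folder containing a file like this pulls the listed packages and their dependencies into bower_components.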

So how much Bower

Now, it’s not like everything is Bower from here on, but for all my frontend development it is! The power of Bower is that it is used across web development communities, not just the .NET community, which I guess is the main audience for NuGet. This means my use of NuGet is now focused on everything that isn’t frontend related. I no longer have to wait for NuGet packages to be created, or build my own from a Git repository, because the libraries are already available on Bower. I really recommend you try it out if you haven’t already. I have tested a few Bower/Visual Studio integrations, but quickly went back to plain Bower and a command prompt. It’s simple and it works.


GOTO conference – The marketing

Another year, another GOTO conference… and then again, not quite. This is not my first time attending the GOTO conference, nor is it my first time as a GOTO blogger. But this year the conference is something new and special to me. It’s about two months since I started a new career as an independent software consultant with my own company, ByPassion.

Not only has my job situation changed, the conference has also changed a bit. In previous years it was three days; this year it’s “only” two. Looking at the schedule and comparing it with previous years’ schedules, that doesn’t seem to have any impact on the quality of top professional speakers; they are there!

As I’m now representing myself and my own company, I’ll have a special focus on networking and marketing this year: introducing my company to people I don’t know yet and catching up with the people I already know. Who knows, it might be the place where I meet my next client :).

Based on the current schedule I plan to attend and blog from the following sessions:

Xamarin.Forms by James Montemagno

I’ve been following everything about Xamarin for a long time, and with a speaker like James Montemagno attending the conference, this talk is a must-see for every .NET mobile developer.

Event Sourcing by Greg Young

A highly recommendable session. I’ve seen other talks by Greg and he is a must-see speaker. The topic is very interesting and something we should all pay more attention to in the systems we build.

The future of ASP.NET web tooling by Mads Kristensen

At the moment a lot is happening around ASP.NET vNext, and who better to give a presentation about ASP.NET tooling than Mads Kristensen, the guy behind Web Essentials? As ASP.NET is part of my everyday work, I hope to pick up a couple of new tips and tricks from this session.

The second day of the conference will contain keywords like Swift, AngularJS and Elasticsearch. Of course this won’t cover the whole conference, and as it gets closer I might change a session or two in my schedule.


Webjobs custom naming with INameResolver

The latest release of the Azure WebJobs SDK also supports custom name resolving for triggers and output. Before, the only option was to hardcode the names in the code. With name resolving it’s now possible to differentiate names across deployment environments etc.

The magic starts with the new JobHostConfiguration and the interface INameResolver

static void Main(string[] args)
{
    var config =
        new JobHostConfiguration(
            "DefaultEndpointsProtocol=http;AccountName=YourAccountNameHere;AccountKey=YourKeyHere")
        {
            NameResolver = new QueueNameResolver()
        };
    var host = new JobHost(config);
    host.RunAndBlock();
}

Compare this to the previous release, where the JobHost got the connection string as a constructor parameter. The benefit of this change is that we can now replace the default NameResolver through a property. Here I plug in my own QueueNameResolver, which looks the name up in configuration, but that’s just one option, not a limitation :).

public class QueueNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        return CloudConfigurationManager.GetSetting(name);

        //or whatever logic you want here to resolve
    }
}

To trigger the new name resolver, simply change the signature of your WebJob handler method to use a %key% token as the argument.

void JobHandler([QueueTrigger("%signupQueueKey%")] Customer newCustomer, [Blob("verifiedcustomers/{Email}.json")] out VerifiedCustomer customer)
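The %signupQueueKey% token is just the name handed to the resolver. With the QueueNameResolver above it is looked up via CloudConfigurationManager, so a hypothetical app setting (in app.config or as an Azure portal setting) could map it to the actual queue name per environment:

<appSettings>
  <!-- Hypothetical key/value: the resolver returns this value, which becomes the queue name. -->
  <add key="signupQueueKey" value="signupqueue-dev" />
</appSettings>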

If we run the job and break at the Resolve method inside our custom resolver, we will now see the key we provided as the argument.
(Screenshot: breaking inside Resolve with the provided key as the name argument.)


Azure webjobs SDK basic parameter binding

In my previous blog post I wrote about how to use logging with the most basic queuing mechanism in the Azure WebJobs SDK, where the message in the queue is just a string.

void Handler([QueueInput("signupqueue")] string customername)
{
    // magic goes here...
}

A small update on the latest webjobs sdk 0.3.0-beta-preview

Since I wrote that blog post, a new preview of the WebJobs SDK has been released, and beside the crazy MS versioning it just keeps getting better and better. In relation to the previous blog post, the input and output parameters have changed a bit. Now the above would be written as:

void Handler([QueueTrigger("signupqueue")] string customername, [Queue("confirmedcustomers")] out string customer)
{
    // magic goes here...
}

QueueInput has been changed to QueueTrigger, and QueueOutput has been changed to just Queue.

Back to the parameter naming

In the previous solution the input was a simple string, which is fine for small, simple scenarios, but in most solutions we tend to have richer data like a customer, an order, etc. Let’s make a sample where users sign up through a form, an everyday web scenario. They enter data in a form and submit it to a server. Instead of executing all the business logic directly at the endpoint, we could push the request to a queue and let a well-crafted handler process the data. Doing it this way we can return to the user fast and get more decoupled and scalable code by having separate handlers for specific tasks, in this case a sign-up process.

To do this we need to change the signature of the handler just a bit

void Handler([QueueTrigger("signupqueue")] Customer newCustomer)
{
     // more magic here
}

And add the class so the binding mechanism knows the format of the input type Customer.

public class Customer
{
      public string Name { get; set; }
      public string Email { get; set; }
}

If we now go to the server tools and browse the Azure queue the WebJob is using, we can add a simple JSON snippet (steps 1-3 in the screenshot below) to verify the binding works as expected. This JSON will be our sign-up message, in the format we expect as input from the queue. When we run the code and break in the handler, we can see the WebJobs binding mechanism in action: our customer now holds the values from the message. Pretty neat, and familiar if you have been using ASP.NET MVC etc.

(Screenshot: adding a JSON test message to the queue.)
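The JSON message itself only needs to match the properties of the Customer class; a hypothetical example message could be:

{
  "Name": "Jane Doe",
  "Email": "jane@example.com"
}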

Now let’s take the sample a bit further. Let’s say the result of the handler should be streamed to a blob inside our Azure storage. The blob should be stored in a container named verifiedcustomers, in a JSON file named after the customer’s email.

First we need to change the signature. We add a second parameter to the handler, the output parameter.

[Blob("verifiedcustomers/{Email}.json")] out VerifiedCustomer customer

We define the output as a blob with the name of the container followed by a placeholder, {Email}. This is a reference to the Email property on the input parameter Customer. Finally we add a suffix to indicate it’s a JSON file. There is one last change: because our verified customers also hold a verified date, we model them as a VerifiedCustomer. The final handler signature, voilà!

NewCustomerEmailHandler([QueueTrigger("signupqueue")] Customer newCustomer, [Blob("verifiedcustomers/{Email}.json")] out VerifiedCustomer customer)
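The body of the handler then just has to assign the out parameter. A minimal sketch, assuming verification is nothing more than stamping a date, could look like this:

public static void NewCustomerEmailHandler(
    [QueueTrigger("signupqueue")] Customer newCustomer,
    [Blob("verifiedcustomers/{Email}.json")] out VerifiedCustomer customer)
{
    // Illustrative only: copy the incoming data and mark it as verified.
    customer = new VerifiedCustomer
    {
        Name = newCustomer.Name,
        Email = newCustomer.Email,
        Verified = DateTime.UtcNow
    };
}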

Custom binding…

If you run the job now and add a message to the queue, the result will be an IndexException with the message “Can’t bind to parameter VerifiedCustomer….”. This happens because the instance of VerifiedCustomer is being streamed to the container, and as it stands there is no default serialization to handle that. Therefore we need to add support for it by implementing the interface ICloudBlobStreamBinder and its reader and writer methods. A simple implementation can be made using Json.NET:

public class VerifiedCustomer : ICloudBlobStreamBinder<VerifiedCustomer>
{
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime Verified { get; set; }

    public VerifiedCustomer ReadFromStream(Stream input)
    {
        // Read the whole blob and deserialize it back into a VerifiedCustomer.
        var reader = new StreamReader(input);
        var data = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<VerifiedCustomer>(data);
    }

    public void WriteToStream(VerifiedCustomer customer, Stream output)
    {
        // Serialize the customer as JSON and write it to the blob stream.
        var data = JsonConvert.SerializeObject(customer);

        using (var writer = new StreamWriter(output))
        {
            writer.Write(data);
            writer.Flush();
        }
    }
}

If we rerun the code and add a new customer to the signup queue, the result will be a new blob in our blob storage container.

(Screenshot: the new blob in the verifiedcustomers container.)


Hosting Azure webjobs outside Azure, with the logging benefits from an Azure hosted webjob

First there is the Azure Webjob SDK

A little after Azure WebJobs was first introduced, the Azure WebJobs SDK was introduced as well, making it possible to trigger a job by an “event” like a message in a queue, a new blob etc. This makes it a lot more frictionless and clean to have a job triggered by some event, compared to implementing the trigger yourself inside the job with polling.

The basic job

First we need to create a regular console application in Visual Studio and install the NuGet package Microsoft.WindowsAzure.Jobs.Host to enable the WebJobs SDK features. Please note that this package is currently in prerelease, so the NuGet command should include the prerelease option:

install-package microsoft.windowsazure.jobs.host -Prerelease

With this package in place, we can create our worker method. For simplicity we create a simple method that gets triggered by a message in a queue and logs the input.

public static void Capture([QueueInput("orderqueue")] string orderMessage)
{
     Console.WriteLine(orderMessage);
     //real work goes here.....
}

This tells the webjob SDK to look for a message in a queue named orderqueue in your storage account, so we also need to specify the storage account details.

static void Main(string[] args)
{
    var host = new JobHost("DefaultEndpointsProtocol=http;AccountName=YourAccountNameHere;AccountKey=YourAccountKeyHere");
    host.RunAndBlock();
}

Specify a new JobHost and start it inside the Main method. You need to replace the account name and account key with your own credentials. This will enable the SDK to reach the orderqueue used in the worker method; all the wiring is done “automatically” by the WebJobs SDK.

Hosting the WebJob outside an Azure Website

Under normal circumstances you would host your WebJob as part of an Azure Website. The reason I want to host outside the Azure environment is that I’m using a library in another project that references GDI, and GDI is not supported by Azure WebJobs yet. So this is a great opportunity to show alternative WebJob hosting.

Deploy and start the webjob

Deployment is simple: just copy your WebJob to a VM or run it directly on your own machine. Start the job as you would a regular console application.

Linking to an Azure website to enable logging

To enable logging for the WebJob, we need to create an Azure Website just as we would for normal hosting. The trick is that inside the Configure section we add a connection string for the AzureJobsRuntime. If you only view the logs through Visual Studio, as illustrated below, this step can be omitted.
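If you would rather keep the setting with the job itself, the same storage account can also be expressed as an ordinary connection string in the console application’s app.config. The name AzureJobsRuntime is the one this post refers to; verify it against the WebJobs SDK version you are running:

<connectionStrings>
  <!-- Assumed name, taken from the post above; check the name expected by your SDK version. -->
  <add name="AzureJobsRuntime"
       connectionString="DefaultEndpointsProtocol=https;AccountName=YourAccountNameHere;AccountKey=YourKeyHere" />
</connectionStrings>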


Logging from the VM to Azure Website

Finally it’s time to show it all in action. From the Visual Studio Server Explorer, create a new queue with the name we used earlier, orderqueue.

(Screenshot: creating the orderqueue in Server Explorer.)

Open the queue and “add new message”. We can do this because we are using a plain string as input, so just write “Hello Log”, press OK, and the message is now in the queue.

If it isn’t already running, start the WebJob. The job will soon pick up the message you added to the queue, as shown in the console output.

(Screenshot: the console output showing the captured message.)

To view the content of the message, open the Server Explorer and open the Blob node under your storage account.

(Screenshot: the blob container holding the WebJobs logs.)

The log should now contain two entries. Click the largest one and you should see your message content inside.

(Screenshots: the log entries and the logged message content opened in Notepad.)

So this is all it takes to enable logging on a webjob hosted outside an Azure website.


How To Use Google Analytics From C# With OAuth

Many people know about and use Google Analytics for their websites and apps to collect user behavior and other useful statistics. Most of the time people also use the collected information directly from the Google Analytics website. On one of the recent projects I’ve been working on, we needed to extract data from Google Analytics from a C# service with NO user interaction, aka a background service. This blog post is a summary of the roadblocks I stumbled upon to get the job done. All code depends on the Google Analytics V3 API, and the source code for this post is available on GitHub.

Setting up the C# project and adding Google Analytics dependencies

For this sample we will use a standard C# console project. Note that I’m running Visual Studio 2013 and .NET 4.5, which will have an impact described later. When the project is created we need to include the Google Analytics NuGet package. The package is currently in beta, but I haven’t had any issues using it (the Google Analytics v3 API itself is not in beta). From the Visual Studio Package Manager Console, search for the prerelease package named google.apis.analytics.v3 using the following command:

get-package -Filter google.apis.analytics.v3 -ListAvailable -Prerelease

To install the package execute the following command

install-package google.apis.analytics.v3 -Prerelease

There are quite a lot of dependencies, so I won’t post a screenshot. The important part is to ensure that the -Prerelease option is included. To proceed from here we need to get the required keys from Google Analytics and set up OAuth authentication.

Setting up Google Analytics to allow read access from a native service application

To extract data from Google Analytics we need to create a project inside the Google Developer Console. This project is used to manage access and keep track of request usage, which we will look at later. Go to the Google Developer Console and create a project. The project will be listed in the overview; here you can see my project named “My Project”. Selecting the project after it is created displays a screen where we need to enable the Analytics API. If you need access to other APIs later on, this is the place to go.

Enabling OAuth access to the project

For this project we want to use OAuth for authentication. Google provides multiple ways of doing this depending on the use case. From the Credentials screen select “Create New Client Id”. Here it’s important to select Installed application and Other. Click Create Client Id, and we get back to the project overview with the new client id listed. Record both the Client ID and the Client Secret, as we need both of these later from the program. That’s it, now we are ready to start coding against the API!

Setting up AnalyticsService in C#

Going back to Visual Studio and the project we created earlier, insert the following code into the main of the program. Note that this is only sample code :).

using System.Threading;
using Google.Apis.Analytics.v3;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Util.Store;

class Program
{
    static string clientId = "INSERT CLIENTID HERE";
    static string clientSecret = "INSERT CLIENTSECRET HERE";
    static string gaUser = "YOURACCOUNT@gmail.com";
    static string gaApplication = "My Project";
    static string oauthTokenFilestorage = "My Project Storage";

    private static AnalyticsService analyticsService;

    private static void Main(string[] args)
    {
        // Authorize against Google using the OAuth client id/secret created earlier.
        var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            new ClientSecrets
            {
                ClientId = clientId,
                ClientSecret = clientSecret
            },
            new[] { AnalyticsService.Scope.AnalyticsReadonly },
            gaUser,
            CancellationToken.None,
            new FileDataStore(oauthTokenFilestorage)
        ).Result;

        // The service used for all requests against the Analytics API.
        var service = new AnalyticsService(new BaseClientService.Initializer()
        {
            HttpClientInitializer = credential,
            ApplicationName = gaApplication
        });
    }
}

What’s happening here is that we create the AnalyticsService, which is the service we are going to use when sending requests to Google Analytics. We provide it with the information we got earlier when creating the project inside the Google Developer Console. Also note that for this purpose we are using the scope AnalyticsReadonly. Finally we pass a FileDataStore for the OAuth tokens. This is the storage the Google client library will use for the OAuth tokens issued for the requests between Google Analytics and our application. On your machine it will be physically located under %AppData%\<StorageName>, where StorageName is the name we specified in the oauthTokenFilestorage field. It is possible to swap this for your own implementation using a database etc., but that will be covered in a later blog post; for now we will just use the file storage.

Ready to Authenticate with Google Analytics

When the information in the source code is updated to match your own settings, we are ready to test that we can initialize the OAuth authentication. As mentioned earlier we will run this service with no user interaction. However, the first time we need to run it “manually”, meaning from a console. This is needed because we will be prompted to allow access and get the initial access_token, but more importantly the refresh_token. The refresh token is the one used to renew the access token when it expires. When we run this the first time we get the token information in a file inside the OAuth storage specified by our app. From there we can copy it wherever we want, either to our own machine or to a server running our service. So the user interaction is only a one-time thing. If you change account or profile id (development, staging, production etc.), you need to redo this step! Hit F5 and our application starts and… crash!

Could not load file or assembly ‘Microsoft.Threading.Tasks.Extensions.Desktop, Version=1.0.16.0, Culture=neutral…….

Now what just happened? Because we are running .NET 4.5, things have changed a bit. If we do a little investigation (thanks to Google and Stack Overflow) we can see that this file should be included by the Microsoft.Bcl.Async NuGet package. A quick look into packages\Microsoft.Bcl.Async.1.0.165 reveals that we have a later version and that the assembly is NOT included in the 4.5 part of the package. To ensure we can authenticate, add a reference to the 4.0 assembly we are missing; simply browse into the package folder and you’ll find both the 4.0 and 4.5 assemblies. Next, add an assembly binding redirect in the project’s app.config file to redirect to the previous version.
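The redirect itself goes into the runtime section of app.config. A hypothetical example is shown below; the publicKeyToken and the version numbers are placeholders and must match the Microsoft.Threading.Tasks.Extensions.Desktop assembly you actually referenced from the package:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Placeholder token/versions: adjust to the assembly shipped in your Microsoft.Bcl.Async package. -->
        <assemblyIdentity name="Microsoft.Threading.Tasks.Extensions.Desktop"
                          publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.0.168.0" newVersion="1.0.16.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>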

Now, with both the assembly reference and the assembly binding in place, run the project again. This time the project launches a browser where we are asked to allow the application access to our Google Analytics data.

(Screenshot: Google’s consent page asking to allow access to Analytics data.)

If we take a look into the OAuth token file storage located under %AppData%\YourStorageName, we should now see a file containing our token information: access_token, refresh_token and some other details. Remember that the most important part here is the refresh token, as it is used to renew the access token whenever it expires.

The 1st Request To Analytics Data

By now everything is ready and we can issue the first request. Enough writing, let’s go the code-first way.

// profileId is the Google Analytics view (profile) id, prefixed with "ga:" (see below).
string start = new DateTime(2014, 1, 1).ToString("yyyy-MM-dd");
string end = new DateTime(2014, 1, 10).ToString("yyyy-MM-dd");
var query = service.Data.Ga.Get(profileId, start, end, "ga:visitors");

query.Dimensions = "ga:visitCount, ga:date, ga:visitorType";
query.Filters = "ga:visitorType==New Visitor";
query.SamplingLevel = DataResource.GaResource.GetRequest.SamplingLevelEnum.HIGHERPRECISION;

var response = query.Execute();

Console.WriteLine("Entries in result: {0}", response.TotalResults);
Console.WriteLine("You had : {0} new visitors from {1} to {2}",
    response.TotalsForAllResults.First(), start, end);
Console.WriteLine("Has more data: {0}", response.NextLink != null);
Console.WriteLine("Sample data: {0}", response.ContainsSampledData);

Building the request

For the purpose of this post we will just make a simple query that gets the new visitors to my blog and not the returning ones. First up we use the Get method of the AnalyticsService. This method takes a profileId, the date range we are requesting within, and finally the metric we want to measure. The profileId is the unique id describing the “table” of the account we are querying. To get the profileId, navigate to the administration section of your Google Analytics account. From there, click “View settings” in the View column and you’ll see the profileId listed as View Id. With the query created, we are able to define the dimensions (the output) and additional filters to narrow down the result set further. For this blog post I’ve added New Visitor as a filter, to filter away returning visitors. Before looking at the response format, take a look at the format of the input parameters: profileId, metric(s), dimension(s) and filter(s) are all prefixed with ga:. This is mandatory, and if you forget it you will get really weird errors. For a detailed description, take a look at the Core Reporting API – Reference Guide. The same goes for the date format.

Handling the response

Once the request is ready we call Execute to get the response. The result contains very detailed meta information besides the actual result. As you can see I’ve listed four fields: TotalResults, TotalsForAllResults, NextLink and ContainsSampledData. TotalResults is a count of the entries contained in this response, and TotalsForAllResults is a sum of all entries according to the metric. NextLink indicates whether we got all results in this response; if not, it points to the next result set. There is a limit of 1000 results per request, but the NextLink makes it easy to navigate to the next result set. Finally, ContainsSampledData indicates whether the result is sampled or not. To get a more precise extract you should limit the request date range and also set the SamplingLevel of the request to HIGHERPRECISION, as in the code above.
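As a sketch of how the 1000-entry limit could be handled, the request can be re-issued page by page. This assumes the generated client exposes the API’s start-index and max-results parameters as StartIndex and MaxResults properties; check the package version you are on:

// Illustrative paging sketch: property names are assumptions based on the generated v3 client.
int startIndex = 1;
Google.Apis.Analytics.v3.Data.GaData page;
do
{
    var pagedQuery = service.Data.Ga.Get(profileId, start, end, "ga:visitors");
    pagedQuery.Dimensions = "ga:visitCount, ga:date, ga:visitorType";
    pagedQuery.Filters = "ga:visitorType==New Visitor";
    pagedQuery.MaxResults = 1000;       // the per-request limit mentioned above
    pagedQuery.StartIndex = startIndex;

    page = pagedQuery.Execute();

    // process page.Rows here...

    startIndex += (page.Rows != null) ? page.Rows.Count : 0;
} while (page.NextLink != null);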

Debugging requests

While building the requests the Google API Explorer comes in very handy. Here you can test all your input and investigate the request and its details.

At the end of the road there is a stop sign

My final note on the basic request setup is a heads-up about the default request limits and quota defined by Google. Get a good understanding of where your limits are, both in number of requests and in frequency. And when you exceed them, as I did, the Error Responses documentation is a good place to start building a more solid foundation. To follow the request usage of the current project/profile, use the old console monitor accessible from the Developer Console. Click APIs, select the Analytics API we enabled earlier, and you’ll see the menu to access both the quota monitor and the report. The report lets you monitor your current and previous usage, and the quota section lets you increase the request frequency etc. So far the new Cloud Console doesn’t seem to monitor correctly and its update frequency is very low. For now, use the “old” one, where you can also download a request report at day level, see failed requests etc. I hope this demystifies the basics of accessing Google Analytics from C#. All source code is available on GitHub.


Execute Book

OK, I’ll admit it: one of my challenges as a developer is to execute. I often get very curious about new technologies, which can harm the execution of the product I’m working on. I’m aware of it, and after reading this great book I got really inspired and took away some ideas to improve myself.

This great book by Drew Wilson & Josh Long was written in only 8 days! It’s about a product built within only 8 days, SpaceBox.io. I read a tweet about the book, watched the video and quickly identified with it :). I bought the book, and 1½ weeks later I had read it from start to end; it is very easy to read and a joy all the way.

I really like how the book is built, and you can almost feel the energy of the writers as you read along. It might be easiest for a software entrepreneur to identify with the content. You will get a lot of input on how and why to forget your old habits and start to execute. Do the thing you feel and do it now; all you need is focus. There isn’t always a need to make big plans, as these will often be a roadblock to shipping what you’re building. No one knows the future, and you can’t plan what your product will be in the future. Ship and get feedback.

I think there is more than one target group for this book. First there are the entrepreneurs who want to create and ship products, but the book might just as well be an inspiration for software companies building projects for clients. The latter part of the book has some great perspectives on the use of technology and how to keep moving forward. Interestingly, the role of the “builder” in the book could be called a full-stack developer: one that cares about and can work in every phase of a product’s lifecycle. You will learn as you go along, and features should be what drives your learning.

I really enjoyed reading the book and would recommend it to everyone who’s interested in building products and shipping them.

Buy the book here. http://executebook.com/


The $100 Startup – value for every cent

(Image: The $100 Startup book cover.)

For some time I’ve been reading this fantastic book by Chris Guillebeau; I’ve finally finished it, and here are my thoughts.

I originally found the book through some references on Amazon, and thought of it as a fast read and an energy kick. I wasn’t let down.
Overall the book describes a lot of startups in different business areas, all started with a very small amount of money, the $100. The idea behind most of them was not to create a million-dollar business, but to follow the passion of the people behind them and change their way of living. One remarkable thing is that it is not yet another Facebook story about a group of IT students. The stories come from all levels of society: the homeless, the manager who just loves to travel, the mom who wants to spend more time with her children and family. Follow your dreams and trust in them.
A big part of the book is written as guides, also available on the website. Easy-to-get-going guides: the micro business plan, how to think about monetizing your business, and later how to increase your sales, to name just a few examples. Exactly this part is what makes the book excellent and also makes it work as a reference. I personally know that I will reread some of the chapters later. The hints are easy to understand and bring to life for everyone. While reading the book I had to take a break from time to time, because I started to think! Think about all the input I got from the book, and whether I could use it in some way for my own business ideas. I would recommend this book to anyone interested in entrepreneurship and small businesses; you won’t be let down.

Find the book here

The book’s homepage: 100startup


The Attackers Are Here, don’t wait!

Don’t wait: when you deploy your service online, you are subject to attack. Web security was one of the main topics at GOTO today, and it could very well be explained by this very simple comic.

First up, Aaron Bedra gave a great talk about some of the defence mechanisms you can use to protect your online services, and especially how to look for anomalies in the requests you receive. He demonstrated ModSecurity, a rule-based web application firewall. But ModSecurity cannot stand alone; instead it should be used together with tools like Repsheet.

Repsheet is a collection of tools to help improve awareness of robots and bad actors visiting your web applications. It is primarily an Apache web server module with a couple of addons that help organize and aggregate the data that it collects.

Aaron demonstrated different indicators to look for in incoming requests; as indicators of fraud etc. stack up in a request, the chance that the request is an attack attempt increases. Aaron really made his point and demonstrated some great examples. Repsheet really seems like a great product and could be a great add-on for PaaS providers like Azure.

The second part of the web security sessions was oriented towards the possible attacks one could make against an online service. The session was presented by Niall Merrigan and covered the items in the OWASP Top 10. As he started by stating:

Developers don’t write secure code because we trust, are lazy and security is hard

And this sounds just about right, but as he pointed out example by example, we might want to understand just a bit more about security and put an effort into it. Very simply, he googled for web.config and got access to some running services’ web.config files, ready to be downloaded. Later he pointed out that IIS should protect you from having your web.config file downloaded, but with a simple trick there is a basic workaround, and suddenly your web.config information, like connection strings etc., is public and could easily be exploited.

No clear text passwords!!

Of course you could hash your passwords, but if someone already has your web.config file, they also have your machineKey, and suddenly the world is not that secure any more. One of the more basic but still very common issues is SQL injection. Niall asked a very basic question: why do we often use the same connection string for both read and write operations? Why do you need write permissions to do a search? Very good question. Niall gave us a lot to think about, and I think it was a wake-up call for most of the audience to start thinking more about the solutions we make in our daily work.


Continuous Brain Food Delivery

What makes a car drive? The motor.

What makes the motor run? Fuel

What do you do to keep the motor running? You continuously refuel it.

Now you might ask how this fits into a development conference like GOTO:

What makes you learn? Your brain receives input

What makes the brain receive inputs? Energy

How do you get energy? You continuously make sure to get the right kind and amount of food.

So what’s all this about? I’m sad to say it, but the food level at the GOTO conference is just way below average for a conference at this price level. All participants pay quite a lot of money for the ticket and expect to get value for that money. But if the brain is not running all day long, you will not get the full output from the sessions, which you should at a conference with sessions as great as these.

Having been to a couple of conferences over the last couple of years, I’ve seen different approaches to making sure we all get food, and the right kind of food. Some use prepacked lunch boxes to reduce the food queue. Others ensure that there is always food available to keep you going, with no peak situations between sessions: continuous food delivery. Then there is GOTO, where you get ONE meal during a whole day of the conference and an absolute minimum of snacks during the breaks. You might argue that we’re developers and could live on only coffee and energy drinks; well, time to get back to reality. When you finally head off to have lunch, you’ll probably end up in a queue just to wait, wait and wait. That’s not good enough.

Just as when developing software, where we try to learn the domain, the organisation behind GOTO should start taking this topic seriously, as this is THE major, but very important, pitfall IMO. The location is great, the audience is great, the speakers too, but the food really falls short. I really hope to see more continuous food delivery, just like NDC Oslo etc. It really works. At NDC you have food available at all hours of the day, and whenever you feel like it you can eat, so your brain never shuts down because of low energy.

Just to illustrate the difference between GOTO and another development conference in Scandinavia.

(Photos: the food queue at GOTO compared with another Scandinavian conference.)
