CodeCloudy

Azure | .Net | JQuery | Javascript | Umbraco

Azure Scheduler 101 – Introduction

Purpose: Run jobs on simple or complex recurring schedules

Windows Azure Scheduler allows you to invoke actions – such as calling HTTP/S endpoints or posting a message to a storage queue – on any schedule. With Scheduler, you create jobs in the cloud that reliably call services inside and outside of Windows Azure. You choose whether to run those jobs right away, on a recurring schedule, or at some point in the future.

 

Where can we use Scheduler?

  • SaaS apps can use Scheduler to invoke a web service that performs data management on a set schedule
  • Internal Uses (1st party):

    Process long-running requests – HTTP/S requests time out when a response is not received within a certain amount of time. For complex requests, such as a series of SQL queries against a large database, posting a message to a storage queue allows you to complete the work without building additional asynchronous logic into your web service.

    • Windows Azure Mobile Services powers its scheduled scripts feature with the Scheduler. Skype and Xbox Video also use the Scheduler to schedule their tasks.
    • Another Windows Azure service uses the Scheduler to regularly invoke a web service that performs diagnostic log cleanup and data aggregation
  • Enable a service to be invoked when offline – Typically, a web service needs to be online at the time the Scheduler invokes the endpoint. With messages posted to storage queues, however, the service can be offline when the Scheduler sends the message and field the request when it later comes online.
  • Recurring application actions: As an example, a service may periodically get data from twitter and gather the data into a regular feed.
  • Daily maintenance: Web applications have needs such as daily pruning of logs, performing backups, and other maintenance tasks. An administrator may choose to backup the database at 1AM every day for the next 9 months, for example.

 

You will see a new Scheduler section in the Azure portal


 

If not, you can enable it from the preview services page


 

Portal Access – High-level view

Job Collections: lists the available job collections, which simply group jobs together.


Jobs: this section shows all the jobs, which we can filter by job collection or status.


 

Next, let's create a job – covered in the next post.


Microsoft beats Amazon on Cloud Storage Prices

Microsoft has replaced Amazon as the top performer in Nasuni Corporation's 2013 cloud storage report.

“This year, our tests revealed that Microsoft Azure Blob Storage has taken a significant step ahead of last year’s leader, Amazon S3, to take the top spot. Across three primary tests (performance, scalability and stability), Microsoft emerged as a top performer in every category” – Nasuni 2013 Cloud Storage Report

The new prices will be applied from March 13th 2014.

 

Recent upgrades to Azure's storage layers have shown great improvements.

azure-perf

 

A pricing comparison:

Azure IaaS disks are $0.095/GB-month with geo redundancy. With AWS, in order to get high durability of VM disks, customers have to pay for both EBS Standard Volumes ($0.05/GB-month) and EBS Snapshots to S3 ($0.095/GB-month) – a combined $0.145/GB-month, which makes Azure about 34% cheaper.

You can read the report HERE

 

You can watch the report HERE

 

Another study of cloud providers says:

Though Amazon EC2 has the lowest price of the five providers at $0.12 per hour (now tied with Windows Azure), the lowest cost does not always mean the best value for a customer looking for maximized performance. Read More

 

To get Azure pricing, go HERE


Simplest way of compressing files using .NET 4.5

 

We will create a sample project to demonstrate compression using .NET 4.5.

  1. Create a sample Console project
  2. Right click references > Add References

    Search for and add a reference to the assembly "System.IO.Compression.FileSystem"

     

  3. Now let’s add a sample folder called “test” and add 2 sample text files in it.

  4. Now select the 2 sample test files and change the property “Copy to output directory” to “copy if newer”

    This creates the test folder and copies both test files into it in the Debug folder, which is the working directory for the program.

     

  5. Now you can use the following code in the Main program to compress the “test” folder

     

    Syntax (C#)

    public static void CreateFromDirectory(

        string sourceDirectoryName,

        string destinationArchiveFileName

    )

    sourceDirectoryName: The path to the directory to be archived, specified as a relative or absolute path. A relative path is interpreted as relative to the current working directory.

    destinationArchiveFileName: The path of the archive to be created, specified as a relative or absolute path. A relative path is interpreted as relative to the current working directory.

     

     

     

    System.IO.Compression.ZipFile.CreateFromDirectory("test", "testzip.zip");

     

    Output in Debug folder:

     

    Applying this in Windows Azure:

    Storing files in the local file system isn't suitable in an Azure environment. We need to store them in a centralized location that all servers can access.

    You can upload your files into a blob container and compress those files using a worker role & store them in another blob container.
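The worker-role idea above can be sketched without the Azure SDK, since its essence is compressing content that lives in memory (e.g. downloaded from a blob) rather than on disk. The `BlobZipper` name and the tuple-based file list below are illustrative assumptions, not part of any official API:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

public static class BlobZipper
{
    // Compresses a set of (name, content) pairs into a single zip archive,
    // entirely in memory - suitable when the "files" are downloaded blobs
    // and the result will be uploaded to another blob container.
    public static byte[] ZipInMemory(params Tuple<string, string>[] files)
    {
        using (var output = new MemoryStream())
        {
            // leaveOpen: true so the MemoryStream is still readable after
            // the ZipArchive flushes its central directory on Dispose.
            using (var archive = new ZipArchive(output, ZipArchiveMode.Create, true))
            {
                foreach (var file in files)
                {
                    var entry = archive.CreateEntry(file.Item1);
                    using (var writer = new StreamWriter(entry.Open(), Encoding.UTF8))
                        writer.Write(file.Item2);
                }
            }
            return output.ToArray();
        }
    }
}
```

In a worker role, the returned byte array would then be uploaded to the destination blob container instead of being written to the local disk.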

     

    Moreover, there are plenty of other libraries available for compression. Each has its pros and cons.

     

    Download Code Sample from HERE


Asp.net Web API 102 – Scaffolding

In the previous blog we discussed the basics of Web API and tried to understand the default code. Now we will create our own custom Web API using a case study.

Case study:

We will build our own requirement.

  • We have a Public Product Catalog
  • Any organization or an individual can publish their products here
  • We will add more requirements when we move forward…

Step 1: Create a Model “Product”


(Note: I have put full namespaces so you can better understand where it came from.)

I have added some data annotations to the properties.

  • ID will be the Key property.
  • Name property is marked as required.
  • And the maximum length of the Description will be 500 characters.
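Based on the bullets above, the model behind the screenshot presumably looks something like this (only ID, Name, and Description are described in the text, so any further properties are omitted; this is a sketch, not the exact class from the screenshot):

```csharp
using System.ComponentModel.DataAnnotations;

public class Product
{
    [Key]                 // ID is the key property
    public int ID { get; set; }

    [Required]            // Name is marked as required
    public string Name { get; set; }

    [MaxLength(500)]      // Description limited to 500 characters
    public string Description { get; set; }
}
```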

Now let’s use Scaffolding to create the Controller using the Model “Product”.

Step 2: Select Scaffolding for Web API 2 with Entity Framework

Right-click the Controllers folder > Add > Select "New Scaffolded Item"


Choose the following option in the Web API category:


Step 3: Populate Controller Details for Scaffolding

  • Give a Controller Name
  • You have the option to create Controllers with Asynchronous Controller Actions or without.
  • Select the Model Class “Product”
  • Create a New Data Context to work with Databases.


Error 1:

When scaffolding the controller from the model, the following error may appear:

"There was an error generating 'WebAPI1.Models.DBContext'. Try rebuilding your project."

Rebuilding the project would solve this error.

Error 2:

If the Product class doesn't have a key property, the following error may appear when scaffolding – for example, if the ID property doesn't have the Key data annotation.


Note: Make sure you rebuild the project after you add the Key property. Also rebuild after any changes in the Model, before Scaffolding.

Now you have created your custom API controller, "ProductController".

In the next blog we will look at the differences between actions with and without asynchronous behavior.



Asp.net Web API 101 – Basics

We will discuss some points and try to find “Why Web Api?”

  • Earlier we had following web services:

    • SOAP services – support only XML

    • WCF services – support many data formats and protocols: HTTP, TCP, UDP, and custom transports

      • For a comparison between SOAP & WCF: Here

      • For a comparison between WCF & Web Api: Here

  • Now apps are available in PCs, Mobiles, Tablets, Notebooks, electronic devices, etc. Not all these devices speak SOAP. But they do speak HTTP. When you have more clients, you need to scale. Web API tries to minimize unnecessary configurations and keeps it simple.
  • It is simply a framework that allows you to build HTTP services. Services that are activated by simple GET requests, or by passing plain XML over POST, and respond with non-SOAP content such as plain XML, a JSON string, or any other content that can be used by the consumer.
  • WCF was initially created to support SOAP + WS-* over a variety of transports, but not all platforms and devices support SOAP; there was a need for non-SOAP services. WCF 3.5 added WebHttpBinding – a new binding for creating non-SOAP services over HTTP, better known as RESTful services. Although WCF's support improved over time, it came with some pain. So the main goal of Web API is to stop looking at HTTP through the eyes of WCF and just use it as a transport protocol to pass requests. Web API aims to properly use URIs, HTTP headers, and the body to create HTTP services accessible to any device or platform.

To Start a Web Api Project using Visual Studio 2013

  • Create a New ASP.NET Web Application


  • Select Web Api template


The project creates a sample Web API controller named "ValuesController". It inherits from the ApiController class, which defines properties and methods for API controllers.

Since Web API is an HTTP service, each controller by default has actions for the HTTP methods:

  • GET
  • POST
  • PUT
  • DELETE

Exercise 1:

http://localhost:16107/api/values

This URL will call the default Method “Get”

    // GET api/values

        public IEnumerable<string> Get()

        {

            return new string[] { "value1", "value2" };

        }

This will return a string array. Example: we can use this to retrieve a product catalog.

Exercise 2:

http://localhost:16107/api/values/001

This URL calls the "Get" overload that takes an id parameter.

        // GET api/values/5

        public string Get(int id)

        {

            return "value";

        }

Example: we can use this to retrieve product by passing a product id.

Exercise 3: POST simple Json string using fiddler

We can use Fiddler, to send a POST requests.




FromBody attribute:

This forces Web API to read a simple type from the request body. Only one argument in the action can be decorated with this attribute; you get an exception when you try to decorate more than one. Without the FromBody attribute, the request returns "HTTP/1.1 405 Method Not Allowed". More
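For reference, the raw request Fiddler sends in Exercise 3 would look roughly like this, assuming the default port 16107 and an action such as `public void Post([FromBody] string value)` (a simple type, so the body is a bare JSON string):

```http
POST http://localhost:16107/api/values HTTP/1.1
Host: localhost:16107
Content-Type: application/json
Content-Length: 8

"value3"
```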

Exercise 4: POST Json Object using fiddler

Post Body = { "ProductName": "Apple", "Price": "22" }


Exercise 5: POST Json Object as a Model using fiddler

Post Body = { "ProductName": "Apple", "Price": "22" }



Exercise 6: POST Form data using fiddler

Change the content type

  • Content-type: application/x-www-form-urlencoded

Change Request body

  • Name=’Apple’&Price=’22’


Exercise 7: PUT (update) Json string using fiddler



Exercise 8: DELETE using fiddler

Note: body part is not applicable for DELETE.



Summary

Now we have a basic idea of Web API: how to start a project, how to understand the sample code created with a new Web API project, and how to call the HTTP methods of an API controller.

In the next blog we will see how we can work with custom methods on an API controller.

References:

http://www.w3schools.com/tags/ref_httpmethods.asp

http://blogs.microsoft.co.il/idof/2012/03/05/wcf-or-aspnet-web-apis-my-two-cents-on-the-subject/


Understanding Windows Azure Tables comparing with Relational SQL Tables (on-premise/SQL Azure)

Table of Contents

  • Background
  • A comparison of Relational SQL Table and Azure Storage Table
  • Azure Storage Table Concept
  • Understanding Entity
  • Understanding Properties by example
  • Partitions
  • Key things to remember

Background

Windows Azure Storage is a storage option provided by Windows Azure. Using it brings all the benefits of Windows Azure, such as high availability, scalability, and security. Azure tables were specially designed to store non-relational data that needs some structure but must be highly scalable. All values in Azure tables are stored as key-value pairs, which makes it a highly scalable storage option.

When someone starts to learn about Windows Azure table storage, the first question that comes to mind is: "Why do we need this when we have sophisticated relational SQL Server tables?"

Therefore, the best way to understand Windows Azure tables is to start from a relational SQL table and learn the differences.

A comparison of Relational SQL Table and Azure Storage Table

| Relational SQL table | Azure Storage table |
| --- | --- |
| Can have relationships between tables | No relationships between tables; follows NoSQL concepts |
| Has rows of values | Rows are considered entities |
| Strictly defined columns that enforce data integrity | No concept of columns; values are stored as key-value pairs |
| Each row has a fixed number of columns | Each entity can have zero or more properties |
| Each value has the data type of its column definition | Each value is independent and can have its own data type |
| No built-in system columns | 3 system properties per entity (PartitionKey, RowKey, Timestamp) |
| Merging data from sources with different formats may require programmatic conversion | Can save data in different formats. Example: Source 1: 01/01/2001; Source 2: "2001" (only the year, as a string) |
| Full ability to query – T-SQL | Limited query options |
| Can apply indexes to any column | Cannot; only PartitionKey & RowKey are indexed (built-in) |
| Geographically redundant price (as of 2014-Mar-09) – $9.99 up to 1 GB per month | Cheaper – $0.095 per GB per month |
| (When comparing with on-premise SQL only) | More availability |
| SQL Azure max capacity (as of 2014-Mar-09) – 150 GB | 200 TB |

Azure Storage Table Concept

In an Azure storage table, a collection of properties (key-value pairs) forms an entity. Entities are meaningfully grouped into partitions using PartitionKeys. Ultimately, the collection of entities grouped into partitions forms the Azure storage table.

Understanding Entity

An entity has 3 system properties:

  1. PartitionKey – the means of partitioning entities for load balancing, with the rule of thumb "Data that needs to be queried together must be kept together".
  2. RowKey – a key that uniquely identifies an entity within a partition. The RowKey together with the PartitionKey makes an entity unique in a given Azure table.
  3. Timestamp – when you first create an entity, the date & time of creation is recorded in this property. If you later change the entity, the value is updated with the last-modified date & time.

These system properties are indexed built-in, and it is efficient to use them as identifying properties. Non-system properties cannot be indexed.

The PartitionKey together with the RowKey will form unique entities:


Understanding Properties by example

Example 1: No fixed Schema for storing values

In a relational SQL table, all values entered under the column "Age" are bound to the limitations defined in the column definition/design, such as data type, max length, unique key, etc.

Now let's try to store those values in an Azure storage table. The values are no longer bound to a particular column (there is no concept of columns).

Each and every value is stored as a key-value pair. Unlike in relational SQL tables, the values can be of different data types as well.

This design enables some new scenarios that were not possible before.

For example, if we store date values in a relational SQL table, we have to store them in a standard format. But in an Azure storage table, we can store the date as a string for some entities if needed.

Example 2:

There is a requirement to create a simple consolidated search engine for products drawn from two different ecommerce stores. Store 1 stores product expiry dates as a DateTime data type (1900-01-01 00:00:00). Store 2, on the other hand, stores only the year and month, as strings ("1900/June"). If we did this using relational SQL, we would have to assume a default date for Store 2 (possibly logically incorrect for the product type) and programmatically convert each and every date, either into a separate column or at query time. But we can push products from both stores into Azure tables without any of this effort (Store 1 values stored as DateTime, Store 2 values stored as String).

Moreover, if we take a particular entity, it can have Zero or more properties.

Example 3

In the above example,

  • First entity will have Zero properties
  • Third entity will have 3 properties (“First”,”Last”,”Birthdate”)
  • Second entity will have 4 properties (“First”,”Last”,”Birthdate”,”Sport”); the property “Sport” only belongs to the second entity and will not be applicable for other entities unless other entities have explicitly defined a property with the same name.
  • PartitionKey together with RowKey forms a unique entity in an Azure storage table. Although the first and second entities both have the RowKey "001", they have different PartitionKeys, so they are unique within the table.
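The uniqueness rule in the last bullet can be sketched in plain C#. This is a conceptual model only (not the real storage SDK): an Azure table behaves like a dictionary keyed on the (PartitionKey, RowKey) pair, with each entity carrying its own independent bag of properties.

```csharp
using System;
using System.Collections.Generic;

// Conceptual sketch: entities are addressed by the composite
// (PartitionKey, RowKey) key, and each entity's properties are an
// independent set of key-value pairs that may hold different types.
public class ConceptualTable
{
    private readonly Dictionary<Tuple<string, string>, Dictionary<string, object>> entities =
        new Dictionary<Tuple<string, string>, Dictionary<string, object>>();

    public bool Insert(string partitionKey, string rowKey,
                       Dictionary<string, object> properties)
    {
        var key = Tuple.Create(partitionKey, rowKey);
        if (entities.ContainsKey(key))
            return false; // the composite key must be unique within the table
        entities[key] = properties;
        return true;
    }

    public int Count { get { return entities.Count; } }
}
```

With this model, inserting ("001", "001") and ("002", "001") both succeed because the PartitionKeys differ, while a second ("001", "001") is rejected.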

Properties support the following data types:

  • Binary
  • Boolean
  • DateTime
  • Double
  • GUID
  • Int32
  • Int64
  • String

Partitions

Azure storage tables are partitioned by the system property called “PartitionKey”.

Entities partitioned with the same PartitionKey belong to the same load-balancing group. In other words, anything within the same partition lives on the same server (same storage machine), so those entities can be queried/accessed faster.

So, as a rule of thumb, when we design Azure storage tables we need to remember that "Data that needs to be queried together must be kept together in the same partition".

Since we cannot index non-system properties, the key idea is to design an Azure table so that we make use of partitioning for load balancing. The art of choosing how to partition entities depends on the requirements, the transactions, and mainly on what kinds of queries we are going to run against the table.

The following are some examples of how developers have used Azure tables and how they have partitioned them according to different requirements:

In this part of the table, we can see that the entities have been partitioned using their document name.

The different versions of the same document are stored using different RowKeys.

In this case, all the versions of a particular document will be stored together. So, retrieving the versions of the same document will be very fast.

As you can see, the PartitionKey or RowKey doesn't have to look like a GUID or an integer; any string value can be used. In the following example, Name is used as the RowKey.

The following is a good example that shows how independently the values are stored for each entity as properties.

Key things to remember

In summary, when we design for azure storage tables with the concept of partitions, following are some key concerns:

  • First of all, know what you cannot do with Azure storage tables. They are not a replacement for relational SQL tables. Depending on the case, they can stand alone or work together with relational SQL tables to form a system; the latter is the more common case.
  • Know the transactions; it is important to know which entities are going to be updated together.
  • Know your queries; it is important to know which entities are going to be queried/accessed together.
  • Therefore, depending on the above concerns, we have to partition data.
  • Sometimes, if we design our tables to serve some key queries, they may not be able to cater for other queries, and we will have to look for workarounds. For relational SQL we have really good guides and best practices; in contrast, there is less direct guidance on designing such NoSQL tables. So, it's more of an art!

How can we store JSON data in Azure tables?

There is no single art to designing a NoSQL schema. It basically depends on the requirements – mainly on how we are going to use and query the data. You can store the JSON in a blob as a .js file if you just want to access it programmatically. Or, if you want to make use of Azure table features and query capabilities, you may convert the JSON into Azure table properties. So, as said, it depends on how you want to use it and on the limitations of Azure tables.
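As a sketch of the "convert JSON into properties" option: one simple, unofficial mapping is to flatten nested JSON names into flat property keys, joining levels with an underscore (table property names are identifier-like, so dots are avoided). The `JsonFlattener` name and the separator choice are illustrative assumptions:

```csharp
using System.Collections.Generic;

public static class JsonFlattener
{
    // Flattens a nested JSON-like structure (dictionaries of values) into
    // flat key-value pairs suitable for storing as table entity properties.
    // Nested names are joined with '_' (an illustrative choice).
    public static Dictionary<string, object> Flatten(
        Dictionary<string, object> node, string prefix = "")
    {
        var result = new Dictionary<string, object>();
        foreach (var pair in node)
        {
            var key = prefix.Length == 0 ? pair.Key : prefix + "_" + pair.Key;
            var child = pair.Value as Dictionary<string, object>;
            if (child != null)
            {
                // Recurse into nested objects, carrying the joined prefix.
                foreach (var inner in Flatten(child, key))
                    result[inner.Key] = inner.Value;
            }
            else
            {
                result[key] = pair.Value;
            }
        }
        return result;
    }
}
```

For example, `{ "Name": "Apple", "Address": { "City": "Colombo" } }` would become the two properties `Name` and `Address_City`.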


Questions from the Windows Azure Storage – Session 01 – Sri Lanka .NET Forum

I must say we had really good participants who were genuinely interested and keen on learning new technologies. Here are some of the interesting questions they asked.

Question: Are SQL Azure and Azure Storage the same?

Reply: Not really. SQL Azure is simply the cloud version of on-premise SQL Server, but with some limitations.

Question: Can we use Visual Studio 2012 to create Azure applications?

Reply: Yes, you can. But Azure releases new features frequently, and most of the Visual Studio tooling is included with the latest Visual Studio versions, so it's always better to use the latest.

Question: How can store json data in azure tables?

Reply: There is no single art to designing a NoSQL schema. It basically depends on the requirements – mainly on how we are going to use and query the data. You can store the JSON in a blob as a .js file if you just want to access it programmatically. Or, if you want to make use of Azure table features and query capabilities, you may convert the JSON into Azure table properties. So, as said, it depends on how you want to use it and on the limitations of Azure tables.

Answers for Other Questions:

https://codecloudy.wordpress.com/2014/03/07/does-azure-tables-internally-use-relational-sql-to-store-its-values/

https://codecloudy.wordpress.com/2014/03/07/can-we-have-indexes-for-properties-other-than-the-system-properties-partitionkey-rowkey/

Some of the questions were really interesting, and I will discuss some of them in detail in my future forum sessions.


Can we have indexes for properties other than the system properties PartitionKey & RowKey?

No, you cannot have indexes on non-system properties. The key idea of Azure tables is to design them so that we make use of partitioning for load balancing. The art of choosing how to partition entities depends on the requirements, the transactions, and mainly on what kinds of queries we are going to run against the table.

System properties

  1. PartitionKey – the means of partitioning entities for load balancing, with the rule of thumb "Data that needs to be queried together must be kept together".
  2. RowKey – a key that uniquely identifies an entity within a partition. The RowKey together with the PartitionKey makes an entity unique in a given Azure table.
  3. Timestamp – when you first create an entity, the date & time of creation is recorded in this property. If you later change the entity, the value is updated with the last-modified date & time.

 

uniqueKeyOfEntity

 


Does Azure tables internally use relational SQL to store its values?

The Storage Emulator, which comes with the Azure SDK, simulates the Azure cloud storage environment in the local development environment. It uses SQL Express/LocalDB with the NTFS file system to do so, and creates these resources the first time we run a cloud project using Visual Studio.

StorageEmulatorArchitecture

But this is only for the simulation by the Storage Emulator.

Note: In earlier Azure versions, starting the storage emulator created some temporary databases in SQL Express. With new versions of the SQL tools, SQL Express doesn't run as a service anymore. I couldn't see the databases using the Visual Studio Server Explorer, but I found the temporary database files (mdf & log) in the user profile folder (C:\User\UserName\).

DevelopmentStorageDB

The actual Azure storage service stores data in a completely different way, without using the SQL Server architecture.

BasicAzureStorageInternalArchitecture

This is a very basic view of the internal architecture. The front ends authenticate and authorize incoming requests and then route them to the partition layer. Partition servers in the partition layer manage all the partitions.
