First things first, you need to have a LinkedIn account and LinkedIn company page to post shares, as they are called on LinkedIn. If you don’t have an account or a company page, you can create one at LinkedIn. Once you have an account, you will need to create an application in the LinkedIn Developer Portal.
On the Developer Portal, create the application by clicking on the Create App button in the center of the page. You will need to fill out the form with the following information:
Field | Description |
---|---|
App name * | The name of your application |
LinkedIn Page * | This is the owner (company page) of the application |
Privacy policy URL | The URL to your privacy policy |
App logo * | The logo for your application |
Legal agreement | Read and accept the API Terms of Use |
(*) Required
Once done, you should see a screen similar to the following:
Once you have an application, you will need to get the client id and client secret for the application. You will need these to authenticate with the LinkedIn API. If you are going to be posting on behalf of a user, you will need to get an OAuth token for the user. If you are going to be posting on behalf of a company, you will need an OAuth token for the company. I won't be covering the organizational OAuth token in this post.
Navigate to the OAuth 2.0 Tools page to begin. You can also get there from your App Information page by clicking on Docs and tools, then OAuth Token Tools.
On the next page, you will need to select your application and choose either Member authorization code (3-legged) or Client credential (2-legged). I didn't come up with the names. Basically, Member authorization is for when you are going to act on behalf of a signed-in member, as I will in this post, while Client credential is for application-to-application access that isn't tied to a specific member. For this post, I will be using Member authorization code (3-legged).
For more on the different types of OAuth 2.0 flows, see LinkedIn Authentication Overview.
You will need to select the scopes required for your application. For this post, I will be using the following scopes:
- r_liteprofile: Read basic profile information
- w_member_social: Post, comment, and like posts on behalf of the user

Click Request access token to continue.
Once you do this, you will be redirected to sign in to LinkedIn to grant access. Enter your credentials and click Sign In.
Now, you will have a screen with the access token and details about the token. You will need this access token to make calls to the LinkedIn API. You can copy the token by clicking on the Copy access token button.
I strongly recommend that you save this access token somewhere secure like Azure Key Vault. If someone gets a hold of it, they can post on behalf of the user. If you do lose it, you can always revoke it and create a new one.
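For example, with the Azure SDK for .NET, storing the token in Key Vault takes only a few lines. This is a minimal sketch, assuming you already have a vault provisioned; the vault URI and secret name are placeholders:

```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Placeholder vault URI and secret name; substitute your own.
var secretClient = new SecretClient(
    new Uri("https://<your-vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());

await secretClient.SetSecretAsync("LinkedIn-AccessToken", accessToken);
```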
You'll notice that the access token expires in approximately two months. You will need to refresh the token before it expires. Details on how to do this are in the LinkedIn Authentication Overview. I won't be covering that in this post.
Now that we have the access token, we can start posting to LinkedIn. I created a class called LinkedInManager
to handle all of the calls to the LinkedIn API. You can get the source in the LinkedIn API Manager GitHub Repository. I will be using the LinkedIn API to get the user's LinkedIn ID and the Share on LinkedIn API to post the share.
All of the API calls to post a share on LinkedIn require the user’s LinkedIn ID. You can get this by making a call to the Profile API. You will need to make a GET
call to the me
endpoint, shown below as an HTTP request.
```http
GET https://api.linkedin.com/v2/me
Authorization: Bearer {{my-access-token}}
```
In this HTTP request, as well as the future requests in this post, you will need to replace {{my-access-token}}
with the access token you received from the OAuth 2.0 Tools page. There will likely be another variable in future HTTP requests, {{my-person-id}}
. This will be the LinkedIn ID you get from the me
endpoint.
In the LinkedInManager
class, I created a method called GetMyLinkedInUserProfile
to make this call. The method is shown below.
```csharp
public async Task<LinkedInUser> GetMyLinkedInUserProfile(string accessToken)
{
    if (string.IsNullOrEmpty(accessToken))
    {
        throw new ArgumentNullException(nameof(accessToken));
    }
    return await ExecuteGetAsync<LinkedInUser>(LinkedInUserUrl, accessToken);
}
```
The ExecuteGetAsync
method is a helper method that makes the call to the LinkedIn API. The method is shown below.
```csharp
private async Task<T> ExecuteGetAsync<T>(string url, string accessToken)
{
_httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
_httpClient.DefaultRequestHeaders.Add ("Authorization", $"Bearer {accessToken}");
var response = await _httpClient.GetAsync(url);
if (response.StatusCode != HttpStatusCode.OK)
throw new HttpRequestException(
$"Invalid status code in the HttpResponseMessage: {response.StatusCode}.");
// Parse the Results
var content = await response.Content.ReadAsStringAsync();
var options = new JsonSerializerOptions
{
PropertyNameCaseInsensitive = true,
};
var results = JsonSerializer.Deserialize<T>(content, options);
if (results == null)
{
throw new HttpRequestException(
$"Unable to deserialize the response from the HttpResponseMessage: {content}.");
}
return results;
}
```
You’ll notice that I am using the HttpClient
class to make the call to the LinkedIn API. I am also using the System.Text.Json
library to deserialize the response from the API call. You can use any library you want to make the call and deserialize the response.
First, the required headers are set: the Accept header is set to application/json, and the Authorization header carries the access token as a bearer token. The method then makes the call to the LinkedIn API; if the response status is not OK, an exception is thrown. Next, the response body is read and deserialized. If the deserialized result is null, an exception is thrown.
Upon success, the LinkedInUser
object is returned. The LinkedInUser
object is shown below.
```csharp
public class LinkedInUser
{
[JsonPropertyName("id")]
public string Id { get; set; }
[JsonPropertyName("profilePicture")]
public ProfilePicture ProfilePicture { get; set; }
[JsonPropertyName("vanityName")]
public string VanityName { get; set; }
[JsonPropertyName("localizedFirstName")]
public string FirstName { get; set; }
[JsonPropertyName("localizedLastName")]
public string LastName { get; set; }
[JsonPropertyName("localizedHeadline")]
public string Headline { get; set; }
[JsonPropertyName("firstName")]
public LocalizedInformation LocalizedFirstName { get; set; }
[JsonPropertyName("lastName")]
public LocalizedInformation LocalizedLastName { get; set; }
[JsonPropertyName("headline")]
public LocalizedInformation LocalizedHeadline { get; set; }
}
```
Note: Not all of these fields will be filled in. What is filled in is based on the scopes of your OAuth token. The Id property is the only property that we use. Note: if you have the r_liteprofile scope, you will get all of the properties except VanityName.
The Id
property of the LinkedInUser
object, as well as other identifiers in the API, follows the Uniform Resource Name (URN) internet standard. The format is urn:li:person:<person-id>
, where <person-id>
is the identifier for the person. For most calls, you will need the full URN. In the LinkedInManager
class, you only need the <person-id>
portion of the URN.
For more on the URNs and IDs in LinkedIn, see the LinkedIn URNs and IDs page.
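As a quick illustration, pulling the <person-id> out of a full URN is a one-liner; the id value below is made up:

```csharp
// A person URN looks like urn:li:person:<person-id>; this id is fabricated.
const string personUrn = "urn:li:person:AbC123xYz";
var personId = personUrn.Split(':')[^1]; // "AbC123xYz"
```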
The Share API on LinkedIn provides the different ways to create shares, or posts, on LinkedIn: plain text, text with a link, and text with an image. I will show you how to create each of these types of shares.
To post plain text, you will need to make a POST
call to the ugcPosts
endpoint. The call to the ugcPosts
endpoint is shown below.
```http
POST https://api.linkedin.com/v2/ugcPosts
Authorization: Bearer {{my-access-token}}
X-Restli-Protocol-Version: 2.0.0
{
"author": "urn:li:person:{{my-person-id}}",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {
"text": "Please ignore this post. This is a test post. It will be deleted shortly.}"
},
"shareMediaCategory": "NONE"
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
```
A few things to note about this call. We introduced a new header X-Restli-Protocol-Version
. This header is required for all POST
calls. The value of the header is 2.0.0
.
In the LinkedIn API Manager, a call is made to PostShareText
. The method is shown below.
```csharp
public async Task<string> PostShareText(string accessToken, string authorId, string postText)
{
// Validation removed for brevity
var shareRequest = new ShareRequest
{
Author = string.Format(LinkedInAuthorUrn, authorId),
Visibility = new Visibility { VisibilityEnum = VisibilityEnum.Anyone },
SpecificContent = new SpecificContent
{
ShareContent = new ShareContent
{
ShareCommentary = new TextProperties()
{
Text = postText
},
ShareMediaCategoryEnum = ShareMediaCategoryEnum.None
}
}
};
var linkedInResponse = await CallPostShareUrl(accessToken, shareRequest);
if (linkedInResponse is { IsSuccess: true, Id: not null })
{
return linkedInResponse.Id;
}
throw new HttpRequestException($"Failed to post status update to LinkedIn: LinkedIn Status Code: '{linkedInResponse.ServiceErrorCode}', LinkedIn Message: '{linkedInResponse.Message}'");
}
```
We start off by creating a ShareRequest object. Now, in my implementation I have made a few assumptions, like setting the Visibility to Anyone. You might want to change that in your implementation. Similar to the GetMyLinkedInUserProfile call, I use a helper method for all of the POST calls. This was helpful for a few reasons: first, I was repeating code; but more importantly, the serialization and deserialization of the request and response for the LinkedIn API is very particular. The helper method is shown below.
```csharp
private async Task<ShareResponse> CallPostShareUrl(string accessToken, ShareRequest shareRequest)
{
// Validation removed for brevity
HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Post, LinkedInPostUrl);
requestMessage.Headers.Add("Authorization", $"Bearer {accessToken}");
requestMessage.Headers.Add ("X-Restli-Protocol-Version", "2.0.0");
JsonSerializerOptions jsonSerializationOptions = new(JsonSerializerDefaults.Web)
{
WriteIndented = false,
DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
};
var jsonRequest = JsonSerializer.Serialize(shareRequest, jsonSerializationOptions);
var jsonContent = new StringContent(jsonRequest, null, "application/json");
requestMessage.Content = jsonContent;
var response = await _httpClient.SendAsync(requestMessage);
var content = await response.Content.ReadAsStringAsync();
var options = new JsonSerializerOptions
{
PropertyNameCaseInsensitive = true
};
var linkedInResponse = JsonSerializer.Deserialize<ShareResponse>(content, options);
if (linkedInResponse == null)
{
// TODO: Custom Exception
throw new HttpRequestException(
$"Unable to deserialize the response from the HttpResponseMessage: {content}.");
}
return linkedInResponse;
}
```
First, I prepare the HTTP request and its headers. Next, I set up the JsonSerializerOptions; take note of the DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull setting, as without it calls to the LinkedIn API will fail with either a 400 or 422 error. I then send the request, deserialize the response, and, if successful, return the ShareResponse object.
The ShareResponse
object, shown below, is used to deserialize the response from the LinkedIn API. Only the Id
property is provided upon a successful call to the LinkedIn API. If the call fails, the Message
, ServiceErrorCode
, and Status
properties will be populated.
```csharp
public class ShareResponse
{
    [JsonPropertyName("message")]
    public string? Message { get; set; }
    [JsonPropertyName("serviceErrorCode")]
    public int? ServiceErrorCode { get; set; }
    [JsonPropertyName("status")]
    public int? Status { get; set; }
    [JsonPropertyName("id")]
    public string? Id { get; set; }

    public bool IsSuccess => !string.IsNullOrEmpty(Id);
}
```
You can do with the Id
property as you like; it is LinkedIn's unique identifier for the post.
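For example, a call to PostShareText and one possible use of the returned Id might look like this. The token, id, and constructor values are placeholders, and the feed URL pattern is my assumption, not something from the LinkedIn docs:

```csharp
var linkedInManager = new LinkedInManager(httpClient);
var shareId = await linkedInManager.PostShareText(
    accessToken,
    personId,
    "Hello from the LinkedIn API!");

// Assumption: the share URN can be dropped into a feed URL to link to the post.
var postUrl = $"https://www.linkedin.com/feed/update/{shareId}";
```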
The post-text-with-link API is very similar to the post-text API. The only difference is the addition of a media
object in the com.linkedin.ugc.ShareContent
section of the request. Here is the sample request.
```http
POST https://api.linkedin.com/v2/ugcPosts
Authorization: Bearer {{my-access-token}}
X-Restli-Protocol-Version: 2.0.0
{
"author": "urn:li:person:{{my-person-id}}",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {
"text": "LinkedIn has an AI Assisted Editor for posting articles."
},
"shareMediaCategory": "ARTICLE",
"media": [
{
"status": "READY",
"description": {
"text": "This is the description of the media."
},
"originalUrl": "https://www.josephguadagno.net/",
"title": {
"text": "Joseph Guadagno Website"
}
}
]
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
```
In the LinkedIn API Manager, a call is made to the PostTextWithLink
method. The code is shown below.
```csharp
public async Task<string> PostShareTextAndLink(string accessToken, string authorId, string postText, string link, string? linkTitle = null, string? linkDescription = null)
{
// Validation removed for brevity
var shareRequest = new ShareRequest
{
Author = string.Format(LinkedInAuthorUrn, authorId),
Visibility = new Visibility { VisibilityEnum = VisibilityEnum.Anyone },
SpecificContent = new SpecificContent
{
ShareContent = new ShareContent
{
ShareCommentary = new TextProperties()
{
Text = postText
},
ShareMediaCategoryEnum = ShareMediaCategoryEnum.Article
}
}
};
var media = new Media{OriginalUrl = link};
if (!string.IsNullOrEmpty(linkDescription))
{
media.Description = new TextProperties {Text = linkDescription};
}
if (!string.IsNullOrEmpty(linkTitle))
{
media.Title = new TextProperties {Text = linkTitle};
}
shareRequest.SpecificContent.ShareContent.Media = new[] { media };
var linkedInResponse = await CallPostShareUrl(accessToken, shareRequest);
if (linkedInResponse is { IsSuccess: true, Id: not null })
{
return linkedInResponse.Id;
}
throw new HttpRequestException(BuildLinkedInResponseErrorMessage(linkedInResponse));
}
```
You'll notice that the start of the code looks much the same as the PostShareText method. However, here we set the ShareMediaCategoryEnum field to ShareMediaCategoryEnum.Article, which creates an "Article", a post with a link. We then construct the Media object and add it to the ShareContent object. Finally, we call the CallPostShareUrl method and return the Id property if successful.
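Putting it together, a call might look like the following; all of the argument values are placeholders:

```csharp
var shareId = await linkedInManager.PostShareTextAndLink(
    accessToken,
    personId,
    "Check out this blog!",
    "https://www.josephguadagno.net/",
    linkTitle: "Joseph Guadagno",
    linkDescription: "Thoughts on .NET and more");
```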
The post-text-with-image API is very similar to the post-text-with-link API. The only difference is that we have to upload the image to LinkedIn. This is a three-step process. Step one is to notify LinkedIn that we want to add a file. I know, it's weird, but I didn't write the API; luckily, it's all wrapped in the LinkedIn API Manager. Step two is to upload the file to LinkedIn. Step three is to post the share with the image.
In order to upload an image to LinkedIn, we need to notify LinkedIn that we want to upload a file. This is done by calling the assets
API and using the registerUpload
method. Here is the sample request.
```http
POST https://api.linkedin.com/v2/assets?action=registerUpload
Authorization: Bearer {{my-access-token}}
X-Restli-Protocol-Version: 2.0.0
{
"registerUploadRequest": {
"recipes": [
"urn:li:digitalmediaRecipe:feedshare-image"
],
"owner": "urn:li:person:{{my-person-id}}",
"serviceRelationships": [
{
"relationshipType": "OWNER",
"identifier": "urn:li:userGeneratedContent"
}
]
}
}
```
The only dynamic part of this request is the owner
field. This is the LinkedIn URN of the person who is uploading the image. Upon success, you will receive a response similar to this:
```json
{
  "value": {
    "uploadMechanism": {
      "com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest": {
        "headers": {},
        "uploadUrl": "https://api.linkedin.com/mediaUpload/<LinkedInAssetId>/feedshare-uploadedImage/0?ca=vector_feedshare&cn=uploads&m=AQJbrN86Zm265gAAAWemyz2pxPSgONtBiZdchrgG872QltnfYjnMdb2j3A&app=1953784&sync=0&v=beta&ut=2H-IhpbfXrRow1"
      }
    },
    "mediaArtifact": "urn:li:digitalmediaMediaArtifact:(urn:li:digitalmediaAsset:<LinkedInAssetId>,urn:li:digitalmediaMediaArtifactClass:feedshare-uploadedImage)",
    "asset": "urn:li:digitalmediaAsset:<LinkedInAssetId>"
  }
}
```
<LinkedInAssetId>
is a unique asset id that LinkedIn assigned. I replaced the real asset id in this response with <LinkedInAssetId>
so no one deletes the object. :smile:
For the next step, we are going to need the uploadUrl
from the response. We will use this to upload the image to LinkedIn. We will also need the asset
field for when we create the post on LinkedIn. Again, this is all wrapped up in the LinkedIn API Manager.
For this step, I don't have an HTTP request to show you; if you want to play around with an HTTP client, you will need to use something like cURL to upload the image. A sample cURL command is shown below.
```sh
curl -i --upload-file {{path-to-image}} --header "Authorization: Bearer {{my-access-token}}" '{{uploadUrl}}'
```
Replace the following tokens:

Token | Value |
---|---|
{{path-to-image}} | The path to the image you want to upload |
{{my-access-token}} | Your LinkedIn access token |
{{uploadUrl}} | The uploadUrl from the response of the registerUpload API call |
For the HTTP request, it is almost the same as the post with text and link. We only change the shareMediaCategory to IMAGE and add the media asset. Here is the sample request.
```http
POST https://api.linkedin.com/v2/ugcPosts
Authorization: Bearer {{my-access-token}}
X-Restli-Protocol-Version: 2.0.0
{
"author": "urn:li:person:{{my-person-id}}",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {
"text": "LinkedIn has an AI Assisted Editor for posting articles."
},
"shareMediaCategory": "IMAGE",
"media": [
{
"status": "READY",
"description": {
"text": "LinkedIn has an AI Assisted Editor for posting articles."
},
"media": "urn:li:digitalmediaAsset:D5622AQHqpGB5YNqcvg",
"originalUrl": "https://www.josephguadagno.net/2023/08/08/linkedin-now-has-an-ai-assisted-editor-for-post",
"title": {
"text": "LinkedIn has an AI Assisted Editor for Post"
}
}
]
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
```
Almost everything is the same as the previous call; however, here we set the media property to the asset value we received when we made the upload registration request.
In the LinkedIn API Manager, the call looks like this.
```csharp
public async Task<string> PostShareTextAndImage(string accessToken, string authorId, string postText, byte[] image, string? imageTitle = null, string? imageDescription = null)
{
// Validation removed for brevity
// Call the Register Image endpoint to get the Asset URN
var uploadResponse = await GetUploadResponse(accessToken, authorId);
// Upload the image
var uploadUrl = uploadResponse.Value.UploadMechanism.MediaUploadHttpRequest.UploadUrl;
var wasFileUploadSuccessful = await UploadImage(accessToken, uploadUrl, image);
if (!wasFileUploadSuccessful)
{
throw new ApplicationException("Failed to upload image to LinkedIn");
}
// Send the image via PostShare
var shareRequest = new ShareRequest
{
Author = string.Format(LinkedInAuthorUrn, authorId),
Visibility = new Visibility { VisibilityEnum = VisibilityEnum.Anyone },
SpecificContent = new SpecificContent
{
ShareContent = new ShareContent
{
ShareCommentary = new TextProperties()
{
Text = postText
},
ShareMediaCategoryEnum = ShareMediaCategoryEnum.Image
}
}
};
var media = new Media{MediaUrn = uploadResponse.Value.Asset};
if (!string.IsNullOrEmpty(imageDescription))
{
media.Description = new TextProperties {Text = imageDescription};
}
if (!string.IsNullOrEmpty(imageTitle))
{
media.Title = new TextProperties {Text = imageTitle};
}
shareRequest.SpecificContent.ShareContent.Media = new[] { media };
var linkedInResponse = await CallPostShareUrl(accessToken, shareRequest);
if (linkedInResponse is { IsSuccess: true, Id: not null })
{
return linkedInResponse.Id;
}
throw new HttpRequestException(BuildLinkedInResponseErrorMessage(linkedInResponse));
}
```
First, the request is made to register the upload via the GetUploadResponse method, which returns an UploadRegistrationResponse upon success; otherwise it throws an exception. Then we upload the file via the UploadImage method, which returns true upon success and false upon failure. The rest of the code is the same as the previous example.
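The UploadImage helper is not shown in this post; here is a minimal sketch of what it might look like, assuming a raw PUT of the image bytes to the uploadUrl, which is what the earlier cURL command does. The real implementation is in the repository:

```csharp
private async Task<bool> UploadImage(string accessToken, string uploadUrl, byte[] image)
{
    // PUT the raw image bytes to the uploadUrl returned by the
    // registerUpload call (the same thing curl --upload-file does).
    var request = new HttpRequestMessage(HttpMethod.Put, uploadUrl);
    request.Headers.Add("Authorization", $"Bearer {accessToken}");
    request.Content = new ByteArrayContent(image);

    var response = await _httpClient.SendAsync(request);
    return response.IsSuccessStatusCode;
}
```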
You can view these methods in the LinkedInApiManager.cs file.
That was a lot to take in. We covered how to get an access token, how to get the user's profile information, and how to post text, links, and images to LinkedIn. I hope you found this useful, along with the breakdown of the calls required to use the API.
If you have any questions, please feel free to reach out to me on Twitter, LinkedIn, or via my contact page.
Rich and I will work together to learn how to use Kubernetes. Here is the multi-day agenda:
At the moment, my company page on LinkedIn does not have enough followers so I cannot broadcast there yet.
]]>So I asked it to help me write a post about the API testing I was doing. I gave it this text:
I want to mention to my connections that I will be testing the LinkedIn APIs and they should ignore the post for a while and apologize for the noise it caused
It came up with the following:
Attention all my connections! I will be testing the LinkedIn APIs shortly. Please excuse any noise this may cause and kindly ignore the post for a while. Thank you for your understanding!
Not bad; the only drawback right now is that it requires a minimum of 30 words to be used. I probably could have written the post with fewer words, but I wanted to try it out.
Once it generated the text, it displayed it in the same dialog that we are used to seeing but with a “thumbs up” and “thumbs down” feedback section. I clicked the “thumbs up” button since the text was pretty good.
When I clicked the Post button, I received a warning dialog box because I did not edit the text, which is a good feature to ensure that you are not posting something that you did not intend to post.
I clicked Post since I was fine with the text.
One thing to note is that, again, at the time I am writing this post, it is in beta and might not be available to everyone.
]]>According to the Development Containers website, developer containers are:
A Development Container (or Dev Container for short) allows you to use a container as a full-featured development environment. It can be used to run an application, to separate tools, libraries, or run times needed for working with a codebase, and to aid in continuous integration and testing. Development Containers can be run locally or remotely, in a private or public cloud.
Ok, so basically, a development container uses Docker behind the scenes to create your development environment so you don't have to. Because I do a lot of presentations and demos using different languages and frameworks, it allows me to have a cleaner machine, plus the ability to work on multiple projects at the same time without having to worry about whether I have the correct version of a language or framework installed.
You need three things to get started with development containers:
More on getting started can be found here.
The Developer Container is described by a file devcontainer.json
which sits in a folder called .devcontainer
in the root of your project.
You can create one in Visual Studio Code
by using the command Dev Containers: Add Dev Container Configuration Files...
from the Command Palette
(Ctrl+Shift+P
).
NOTE: Previous versions of this extension were called Remote Containers.
You will be presented with a list of options to choose from:
Option | Description |
---|---|
From a predefined container configuration definition… | Use a base configuration from the container definition registry |
From ‘dockerfile’ | Refer to the existing ‘dockerfile’ in the container configuration |
From ‘docker-compose’ | Refer to the existing ‘docker-compose.yml’ in the container configuration |
Learn More | Documentation on predefined container definitions |
I chose the From a predefined container configuration definition...
option and was presented with a list of options to choose from:
There are a lot of templates to choose from.
Since I was working on my blog, which is a Jekyll site,
I selected the *Show All Definitions...*
option and chose Jekyll
from the list.
You can see the full list of templates here.
Depending on which container template you choose, you may be presented with additional options or versions. After the version is selected you will then be prompted to Select additional features to install. Here is where you can add additional tools to your container like Git, Angular, or Node.js.
A full list of features can be found here.
In my scenario, I did not need any additional features, so I clicked Ok.
In a few seconds, the files were created, and I was ready to start working.
There was a devcontainer.json
file created in the .devcontainer
folder in the root of the project.
In my case, there was an extra file called post-create.sh
, more on that later. The .devcontainer
folder now looks like this:
```jsonc
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/jekyll
{
"name": "Jekyll",
"image": "mcr.microsoft.com/devcontainers/jekyll:0-buster",
// Features to add to the dev container. More info: https://containers.dev/features.
// "features": {},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [
// Jekyll server
4000,
// Live reload server
35729
],
// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "sh .devcontainer/post-create.sh"
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}
```
In the first few lines, outside the comments, you will see a name
and image
property.
The name
is the name of the template and the image
is the image that will be used to create the container.
So, in this example, I am using the Jekyll
template and the mcr.microsoft.com/devcontainers/jekyll:0-buster
image.
Most of the other lines are specific to the type of template you chose, but are relatively common across all templates.
Section | Description |
---|---|
features | This is where you can add additional tools to your container. More details |
forwardPorts | This is where you can forward ports from the container to your local machine. |
postCreateCommand | This is where you can run a script after the container is created. |
customizations | This is where you can customize the instance of Visual Studio Code that will be used in the container. |
For more details on the devcontainer.json
file, see the specification.
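As an example of the features section, pulling in the GitHub CLI feature looks something like this; the feature id comes from the dev container features index, so treat the exact version tag as illustrative:

```json
{
  "features": {
    "ghcr.io/devcontainers/features/github-cli:1": {}
  }
}
```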
For Jekyll, we need to customize a few things.
First off, we need to forward ports 4000
(Jekyll server) and 35729
(Live reload server).
This can be done by adding the following to the devcontainer.json
file:
```json
{
  "forwardPorts": [
    4000,
    35729
  ]
}
```
…which is outlined in the forwardPorts section of the first devcontainer.json shown above.
Next, we need to make sure that bundler
is installed and all the gems are installed.
This is done with the postCreateCommand
property.
For this property, we execute the post-create.sh
script that was created in the .devcontainer
folder.
```json
{
  "postCreateCommand": "sh .devcontainer/post-create.sh"
}
```
The template provides a default post-create.sh
script that looks like this:
```sh
#!/bin/sh

# Install the version of Bundler.
if [ -f Gemfile.lock ] && grep "BUNDLED WITH" Gemfile.lock > /dev/null; then
    cat Gemfile.lock | tail -n 2 | grep -C2 "BUNDLED WITH" | tail -n 1 | xargs gem install bundler -v
fi

# If there's a Gemfile, then run `bundle install`
# It's assumed that the Gemfile will install Jekyll too
if [ -f Gemfile ]; then
    bundle install
fi
```
The customizations
property in the devcontainer.json
file is where
you can customize the instance of Visual Studio Code that will be used in the container.
This is where you can add extensions, settings, and more.
For me, there are four extensions that I need to have when working with Markdown and Jekyll:
You can add them to the devcontainer.json
file like this:
```json
{
  "customizations": {
    "vscode": {
      "extensions": [
        "yzhang.markdown-all-in-one",
        "DavidAnson.vscode-markdownlint",
        "bierner.markdown-emoji",
        "streetsidesoftware.code-spell-checker"
      ]
    }
  }
}
```
You can also add additional settings to the customizations
section.
Details on Visual Studio Code settings can be found here.
After the devcontainer.json file is created, you will need to build the container.
You can do this by opening the Command Palette (Ctrl+Shift+P
) and selecting the Dev Containers: Rebuild and Reopen in Container command.
This will build the container and open a new instance of Visual Studio Code in the container.
This will take a few minutes the first time you do it, but will be much faster after that.
You may be prompted by a notification that says:
The git repository in the current folder is potentially unsafe as the folder is owned by someone other than the current user. If you do get this, click on Manage Unsafe Repositories and then click on the repository folder.
After the container is built and the postCreateCommand
script, if any, is run,
you should see a message in the terminal, “Done. Press any key to close the terminal.”.
Feel free to close the terminal and start working on your project.
Now, with the devcontainer.json
file in the project, whenever you open the folder in Visual Studio Code,
it will ask you if you want to open the project in a container.
Reopen in Container will open the container and map the local files into the container. Clone in Volume will clone the repository into a volume and open the container. This is generally faster.
You can click on the green icon in the lower left corner of the window that says Dev Container:,
along with the name of the container, to close the container.
You can also open the Command Palette (Ctrl+Shift+P
) and select the Remote: Close Remote Connection command.
Off to Bing, I went to see if I could find a setting to change the sort order.
It turns out that there is a setting to change the sort order.
The setting is explorer.sortOrderLexicographicOptions
and has four options:
- default - Default sort order (uppercase and lowercase names are mixed together)
- lower - Lowercase first (lowercase names are grouped together before uppercase names)
- upper - Uppercase first (uppercase names are grouped together before lowercase names)
- unicode - Unicode order (names are sorted in Unicode order)

After changing the order to unicode, the file list was sorted the same way as I had it in JetBrains Rider.
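In settings.json, that change looks like this:

```json
{
  "explorer.sortOrderLexicographicOptions": "unicode"
}
```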
```text
A connection was successfully established with the server, but then an error occurred during the login process.
(provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)
```
I could not figure out what was wrong.
It worked a few days ago.
Then one attendee said that he saw something similar last week,
and he had to add another parameter to the connection string, encrypt=optional
.
I added that to the connection string, and it worked.
I was able to continue with the workshop.
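For illustration, the amended connection string looked something like this; the server and database names are placeholders:

```csharp
// Placeholder server/database values; the key addition is encrypt=optional.
var connectionString =
    "Server=localhost;Database=WorkshopDb;Trusted_Connection=True;encrypt=optional";
```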
However, I was curious why this was happening.
I did some research and found that there was a breaking change to the Microsoft.Data.SqlClient
,
as outlined on the Entity Framework Core 7.0 Breaking Changes page.
As it turns out, when I ran this workshop a week or so ago, I was using Entity Framework Core version 6.0, along with the version of Microsoft.Data.SqlClient it depends on. The breaking change, encryption on by default, was introduced in the newer version of Microsoft.Data.SqlClient that Entity Framework Core version 7.0 uses.
It turns out the change is important. I/we should have the proper certificates on the SQL Server. That’s the next thing I need to figure out how to do locally, so that I can continue to use the default connection string. However, I’m glad I found the issue and was able to continue with the workshop.
Hopefully, this helps someone else.
]]>Like the definition of Software Architecture, which the author calls out, this book is a bit general. However, it is a good start for someone that does not understand what the role is or does and either wants to become an architect or work with an architect.
Purchase Software Architecture for Web Developers on Amazon.
Note: I was given a free copy of this book in return for an honest review.
]]>Thank you for electing me to the Board of Directors in 2022
For 20 years or so I have been in Software Development. During that time I have used many tools, languages, and technologies. I started out programming with a small book on QuickBASIC. I later moved on to Visual Basic for DOS. Windows then came along and I started using Visual Basic for Windows. I then migrated to Visual Basic .NET and eventually ended up using Visual C#. I work as a Senior Engineering Director at Rocket Mortgage, based in Detroit, MI. I am a public speaker and present internationally on a lot of different technology topics.
One of my primary goals in life is to leave this place (planet) better than it was when I joined it. I do that in many ways. My favorite way is to grow people and help make them better. I do this through education: blogging, presenting, and public speaking. I also do this by helping others find ways to help themselves grow. I believe that by being on the board of the .NET Foundation, I can help the vision of .NET and make it easier for others to adopt and grow themselves. In the past, I have worked with many other talented professionals while serving on the Board of Directors for INETA to help spread the education and adoption of .NET.
For the past 3 years, I have been active with the .NET Foundation, serving on the Membership Committee. I feel I am very well suited for the position.
If you are on Twitter, you can follow the curated list of .NET Foundation Board of Directors 2022 Candidates here.
INETA was the International .NET Association. I served on the board for eight years. Two as Director of Marketing, four as President, and two as Vice President.
This book is for beginner to intermediate-level .NET developers who want to employ the latest parallel and concurrency features in .NET when building their applications. It has a lot of real-world examples that you can use in your own applications. You'll learn best practices to avoid application deadlocks, how to safely update your UI, how to debug your parallel code, and more.
This book should be in your library if you’re a .NET developer.
Purchase Parallel Programming and Concurrency with C# 10 and .NET 6 on Amazon.
Note: I was a technical reviewer for this book.
]]>ExcludeFromCoverage
attribute :smile:. Well this is mocking comes in.
Mocking is a technique, supported by a framework, that allows you to create a mock object that simulates the behavior of a real object. You can use the mock object to verify that the real object was called with the expected parameters (and not with unexpected ones), and that it was called the expected number of times. The possibilities are endless. Mocking comes in three flavors: fakes, stubs, and mocks. Fakes are the simplest: working but simplified stand-ins for a dependency. Stubs provide canned answers to the calls made during a test. Mocks go a step further and let you verify how the dependency was called.
For more information on mocking and the differences between stubs, fakes and mocks read the Fakes, Stubs and Mocks blog post.
First, you’ll need a mocking framework to get started. Something like Telerik JustMock or their free version JustMock Lite.
A mocking framework is what creates the objects that "pretend" to be the dependencies of the object you are testing.
Now that you have a mocking framework, let's get started with the primary parts of the unit testing process: Arrange, Act, Assert. Arrange, Act, Assert, or AAA, is a common term used to describe the process of setting up the test environment, executing the test, and verifying the results. It's a best practice in unit testing. Basically, each of your unit tests should have these three parts:
When I write tests, in this case using xUnit, I generally start with this “stub” pattern:
```csharp
[Fact]
public void GetContact_WithAnInvalidId_ShouldReturnNull()
{
    // Arrange

    // Act

    // Assert
}
```
The method name follows a consistent format:
- [Fact]: This is a fact test.
- public void: This is a public method.
- GetContact: The method you are testing.
- _WithAnInvalidId_: Whatever variables you are using; in this example, an invalid id.
- ShouldReturnNull: The expected outcome.

While this convention is not required, I tend to use it so that when I, or another engineer, am looking at the results or the code, the intent of the test is clear.
There are a lot of different types of things to mock, like services, databases, queues, and other types of dependencies. For this introductory example, I am going to demonstrate different ways to test a class that requires a dependency of a database. I’ll be using xUnit and Telerik JustMock for these examples.
The project used in this example can be found here. This project is a C# project that was built on my Twitch stream. The application provides a way to manage a list of contacts. It uses a variety of technologies including:
With all these dependencies, I needed a way to validate that they are working as expected. And before you ask: no, I am not testing the functionality of SQL Server, Azure Storage, or Azure Functions. I am only testing the interaction with these services. That's where mocking comes in. For the rest of this post, I'll focus on testing the ContactManager
class and mocking the ContactRepository
.
Before we get started, let’s take a look at the ContactManager
class. The ContactManager
implements the IContactManager
interface. This is what we are testing.
```csharp
public interface IContactManager
{
Contact GetContact(int contactId);
List<Contact> GetContacts();
List<Contact> GetContacts(string firstName, string lastName);
Contact SaveContact(Contact contact);
bool DeleteContact(int contactId);
bool DeleteContact(Contact contact);
List<Phone> GetContactPhones(int contactId);
Phone GetContactPhone(int contactId, int phoneId);
List<Address> GetContactAddresses(int contactId);
Address GetContactAddress(int contactId, int addressId);
/// Other methods removed for brevity
}
```
Full code for this class can be found here
We’ll be mocking the ContactRepository
object which implements the IContactRepository
interface.
```csharp
public interface IContactRepository
{
Contact GetContact(int contactId);
List<Contact> GetContacts();
List<Contact> GetContacts(string firstName, string lastName);
Contact SaveContact(Contact contact);
bool DeleteContact(int contactId);
bool DeleteContact(Contact contact);
List<Phone> GetContactPhones(int contactId);
Phone GetContactPhone(int contactId, int phoneId);
List<Address> GetContactAddresses(int contactId);
Address GetContactAddress(int contactId, int addressId);
/// Other methods removed for brevity
}
```
While objects that are being mocked don't need to be interfaces, it certainly helps. The IContactManager interface is a contract that defines the methods that interact with a contact; the ContactManager class implements it. However, one thing to note is that the ContactManager requires an IContactRepository dependency, which is what we are going to mock. The IContactRepository interface defines the contract with the database, which we do not want to test. This is where mocking comes in. We want to be able to test that the logic in the ContactManager class is working as expected without going back and forth to the database. This allows us to test things like validation of objects on save, returning the correct exceptions when things go wrong, etc.
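Conceptually, the ContactManager takes the repository as a constructor dependency, which is what makes it mockable. This is a sketch of the shape, not the code from the repository:

```csharp
public class ContactManager : IContactManager
{
    private readonly IContactRepository _contactRepository;

    // The repository is injected, so a test can pass in a mock
    // instead of a real database-backed implementation.
    public ContactManager(IContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    // Interface methods removed for brevity
}
```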
Let’s start with the most common test. Let’s validate that a call to GetContacts
returns a list of contacts. We’ll start with the simplest test, and then move to more complex tests.
The signature of GetContacts
is:
```csharp
Task<List<Contact>> GetContacts();
```
If we start with our template from above, we should stub out a test that looks like this:
```csharp
public void GetContacts_ShouldReturnLists()
{
    // Arrange

    // Act

    // Assert
}
```
Now, let's look at the arrange part. For the arrange part, we need to set up the mocks so that the mocking framework knows what to mimic or mock. Here's the arrange part for the GetContacts_ShouldReturnLists
method:
```csharp
var mockContactRepository = Mock.Create<IContactRepository>();
Mock.Arrange(() => mockContactRepository.GetContacts())
    .Returns(new List<Contact>
    {
        new Contact { ContactId = 1 }, new Contact { ContactId = 2 }
    });
var contactManager = new ContactManager(mockContactRepository);
```
On line 1, we create a variable, mockContactRepository, that is the mock of the IContactRepository interface. On line 2, we arrange a mock of the GetContacts method. On lines 3-6, we create a list of contacts and tell the mocking framework to return this list when a call is made to GetContacts. Finally, on line 7, we create a new ContactManager object and pass in the mock IContactRepository object.
In this case, the act is trivial. We just call the GetContacts
method on the ContactManager
object.
```csharp
var contacts = contactManager.GetContacts();
```
This should return a list of contacts with two contacts with the ids of 1 and 2.
Let’s validate that the list of contacts has two contacts.
```csharp
Assert.NotNull(contacts);
Assert.Equal(2, contacts.Count);
```
Line 1 is checking that the list of contacts is not null. Line 2 is checking that the list of contacts has two contacts.
```csharp
[Fact]
public void GetContacts_ShouldReturnLists()
{
// Arrange
var mockContactRepository = Mock.Create<IContactRepository>();
Mock.Arrange(() => mockContactRepository.GetContacts())
.Returns(new List<Contact>
{
new Contact { ContactId = 1 }, new Contact { ContactId = 2 }
});
var contactManager = new ContactManager(mockContactRepository);
// Act
var contacts = contactManager.GetContacts();
// Assert
Assert.NotNull(contacts);
Assert.Equal(2, contacts.Count);
}
```
There is a method in the ContactManager
called GetContact
which requires an integer as a parameter. In our business case, the identifier of a contact is a positive number (integer). So let's set up some tests that make sure a call to GetContact with a negative number returns null, and a call to GetContact with a positive number returns a contact.
For this, we'll use a feature called matchers. Matchers let you ignore passing actual values as arguments used in mocks. Instead, they give you the possibility to pass just an expression that satisfies the argument type or the expected value range. This means that we don't have to write a test for each possible value. We can just write a test for the range of values. We are going to use the InRange
matcher for our two tests.
For the test, GetContact_WithAnInvalidId_ShouldReturnNull
where we expect a null
return, we would arrange the test like this:
```csharp
Mock.Arrange(() =>
        mockContactRepository.GetContact(Arg.IsInRange(int.MinValue, 0, RangeKind.Inclusive)))
    .Returns<Contact>(null);
```
In this arrangement, we are saying that when a call to GetContact
is made with an argument that is in the range of int.MinValue
to 0, inclusive, we should return null
.
Our act and assert look like:
```csharp
// Act
var contact = contactManager.GetContact(-1); // Any number less than zero

// Assert
Assert.Null(contact);
```
For the test, GetContact_WithAnValidId_ShouldReturnContact
, we would arrange the test like this:
```csharp
Mock.Arrange(() =>
        mockContactRepository.GetContact(Arg.IsInRange(1, int.MaxValue, RangeKind.Inclusive)))
    .Returns(
        (int contactId) => new Contact
        {
            ContactId = contactId
        });
var contactManager = new ContactManager(mockContactRepository);
const int requestedContactId = 1;
```
This one required a little bit more work because we needed to specify an object to return and a value for the contact id to validate in our test.
Our act and assert look like:
```csharp
// Act
// Assumes that a contact record exists with the ContactId of 1
var contact = contactManager.GetContact(requestedContactId);

// Assert
Assert.NotNull(contact);
Assert.Equal(requestedContactId, contact.ContactId);
```
The GetContacts
method has an overload which expects two string parameters, one for first name and the other for last name. The method also requires that the first name and last name are not null or empty; if they are, it should throw an ArgumentNullException
. Let’s create a test that validates that a call to GetContacts
with an empty first name and last name throws the exception.
Let’s arrange the test like this:
```csharp
// Arrange
var mockContactRepository = Mock.Create<IContactRepository>();
Mock.Arrange(() =>
    mockContactRepository.GetContacts(null, Arg.IsAny<string>()));
var contactManager = new ContactManager(mockContactRepository);
```
Here we are passing a null
for the FirstName
parameter and using the Arg.IsAny<string>
matcher for the LastName
parameter which will match any string.
Our act, which is also an assert, looks like this:
```csharp
// Act
ArgumentNullException ex =
    Assert.Throws<ArgumentNullException>(() => contactManager.GetContacts(null, "Guadagno"));
```
Here we are creating a variable ex
which is of type ArgumentNullException
and then we are asserting that the GetContacts
method throws an ArgumentNullException
when called with the FirstName
parameter set to null
and the LastName
parameter set to Guadagno
.
Then in the assert, we are checking that the exception message is correct.
```csharp
// Assert
Assert.Equal("firstName", ex.ParamName);
Assert.Equal("FirstName is a required field (Parameter 'firstName')", ex.Message);
```
Note: JustMock supports an alternative way of asserting that an exception is thrown. We can use Mock.Arrange with the Throws<T> matcher to assert that a call throws an exception of a specific type.
```csharp
Mock.Arrange(() => contactManager.GetContacts(null, "Guadagno"))
    .Throws<ArgumentNullException>("FirstName is a required field (Parameter 'firstName')");
```
The complete code for the ContactManager
class can be found here.
The complete code for the ContactManagerTest
class can be found here.
This just scratches the surface of mocking. There are many more ways to mock using a mocking framework like JustMock. Maybe we’ll cover more in a future post.
]]>All of these plugins can be downloaded from either the JetBrains plugin Marketplace or directly in the IDE.
So here is the list.
Plugin | What it does |
---|---|
.env files support | As the name implies, it provides environment variable completion for Dockerfile and docker-compose.yml files |
GitHub Copilot | GitHub Copilot uses OpenAI Codex to suggest code and entire functions in real-time right from your editor. Note: This plugin is free but requires a paid subscription service |
Key Promoter X | Lets you know if there is a keyboard shortcut for any mouse-based IDE commands |
PowerShell | Provides PowerShell intellisense and script execution support for IntelliJ IDEs |
Rainbow Brackets | Provides colored brackets, parentheses, and lines in the IDE |
String Manipulation | Case switching, sorting, filtering, incrementing, aligning to columns, grepping, escaping, encoding… Very helpful when working with tabular data |
Structured Logging | Contains some useful analyzers for structured logging. Supports Serilog, NLog, and Microsoft.Extensions.Logging |
Plugin | What it does |
---|---|
.NET Core User Secrets | Provides the ability to create and edit user secrets in .NET projects |
Plugin | What it does |
---|---|
Application Insights Debug Log Viewer | Provides the ability to view Azure Monitor (Application Insights) telemetry in the IDE |
Azure DevOps | Azure DevOps is a plugin to enable working with Git and TFVC repositories on Azure DevOps Services or Team Foundation Server (TFS) 2015+ |
Azure Toolkit for Rider | Rider plugin for integration with Azure cloud services. Allow to create, configure, and deploy .Net Core and .Net Web Apps to Azure from Rider on all supported platforms |
Plugin | What it does |
---|---|
Presentation Assistant | This plugin shows name and Win/Mac shortcuts of any action you invoke |
Window Resizer | This plugin lets you quickly resize windows to any of several predefined orientations/sizes |
Plugin | What it does |
---|---|
Grazie | Intelligent spelling and grammar checks for any text you write in the IDE |
Grazie Professional | Enhances the base Grazie plugin with advanced writing assistance for English text in your IDE. Grazie Professional is a result of the latest developments in deep learning and natural language processing |
Like I started out with, these are the plugins I use every day for coding, presenting, and writing. Do you have a favorite not listed? Let me know.
]]>I had around 20 versions of .NET SDKs on my machine.
And close to 20 versions of .NET Runtime on my machine.
I decided to clean up some older versions of .NET SDKs and Runtime. This is where the .NET Uninstall Tool comes in handy. This tool allows you to see older versions of .NET SDKs and Runtime and uninstall them.
You can download the tool here. There are instructions for installation for both Windows and macOS.
Once the tool is installed, you can use it to see the versions of .NET SDKs and Runtime that are installed on your machine by executing the following command:
```sh
./dotnet-core-uninstall list
```
This will show you a list similar to the images above.
If you are not ready to uninstall a particular version of the .NET SDK or Runtime, you can use the following command to see what would happen if you were to uninstall that version:
```sh
./dotnet-core-uninstall dry-run
```
There are quite a few different ways to uninstall the .NET SDKs and Runtimes.
For each option you need to choose either the sdk
, runtime
, aspnet-runtime
, or hosting-bundle
.
You then need to specify which versions you want to uninstall.
There are options for all
, all-but-latest
, latest
, and many more.
You can see the full list of options here.
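For example, to preview what removing everything but the latest SDK would do, you can combine those options with dry-run; this is a sketch, so double-check the options against the tool's help:

```sh
./dotnet-core-uninstall dry-run --all-but-latest --sdk
```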
I chose to use the all-previews-but-latest option, which will uninstall all preview .NET SDK and Runtime versions except the latest of each preview, and the all-lower-patches option, which will uninstall the .NET SDK and Runtime versions that have been superseded by higher patches.
After running each of the commands, my machine will look like the image below.
Here is the script I used to uninstall the .NET SDKs and Runtime.
Note: the script requires elevated permissions to run.
```sh
sudo ./dotnet-core-uninstall remove --all-previews-but-latest --sdk
sudo ./dotnet-core-uninstall remove --all-previews-but-latest --runtime
sudo ./dotnet-core-uninstall remove --all-lower-patches --sdk
sudo ./dotnet-core-uninstall remove --all-lower-patches --runtime
```
```sh
./dotnet-core-uninstall remove --all-previews-but-latest --sdk
./dotnet-core-uninstall remove --all-previews-but-latest --runtime
./dotnet-core-uninstall remove --all-lower-patches --sdk
./dotnet-core-uninstall remove --all-lower-patches --runtime
```
It was pretty easy to unclutter my machine of the older versions of .NET SDKs and Runtime once I had the Uninstall Tool installed and ran the script. Now, I have to remember to run this script every once in a while.
]]>Note: When I started creating this post, Microsoft is/was in the process of renaming Microsoft Identity to Microsoft Entra. As a result, some of the links in this post might change in the future. :slightly_frowning_face:
I’m going to assume that you already have an API secured with Microsoft Identity, if not, you can check out a series I put together previously called Protecting an ASP.NET Core API with Microsoft Identity Platform.
I’m also going to assume that you already have swagger configured. If not, check out this
The first step is to add the security requirements, AddSecurityRequirement
, and security definitions, AddSecurityDefinition
to the Swagger configuration.
Locate in your application code, typically in the Program.cs
, the AddSwaggerGen
method and add the following code:
```csharp
// Enabled OAuth security in Swagger
var scopes = JosephGuadagno.Broadcasting.Domain.Scopes.ToDictionary(settings.ApiScopeUrl);
scopes.Add($"{settings.ApiScopeUrl}user_impersonation", "Access application on user behalf");
c.AddSecurityRequirement(new OpenApiSecurityRequirement() {
{
new OpenApiSecurityScheme {
Reference = new OpenApiReference {
Type = ReferenceType.SecurityScheme,
Id = "oauth2"
},
Scheme = "oauth2",
Name = "oauth2",
In = ParameterLocation.Header
},
new List <string> ()
}
});
c.AddSecurityDefinition("oauth2", new OpenApiSecurityScheme
{
Type = SecuritySchemeType.OAuth2,
Flows = new OpenApiOAuthFlows
{
Implicit = new OpenApiOAuthFlow()
{
AuthorizationUrl = new Uri("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"),
TokenUrl = new Uri("https://login.microsoftonline.com/common/oauth2/v2.0/token"),
Scopes = scopes
}
}
});
```
You can also view the code here.
Most of this code does not need to change except the scopes
variable and the AuthorizationUrl
and TokenUrl
values.
The scopes
variable is a dictionary that maps the scope name to the scope description. The scope description is used in the swagger UI to describe the scope. This should be a list of scopes that you want your Swagger UI to have access to.
By the way, this should only be done in a development and/or testing environment. In most cases, you will not want to enable the Swagger UI in production.
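For reference, the scopes dictionary handed to Swagger might look something like this; the application id and scope name below are placeholders:

```csharp
// Placeholder app id and scope; your ApiScopeUrl will differ.
var scopes = new Dictionary<string, string>
{
    ["api://00000000-0000-0000-0000-000000000000/access_as_user"] =
        "Access the API as the signed-in user"
};
```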
The AuthorizationUrl
and TokenUrl
may change depending on the tenant and/or organization type you selected when you created your application. You can find the correct values on the Register an Application page.
First, we'll need to get a client secret for the Swagger UI so that the application can authenticate users to the API. We will need to go to the application in whichever Azure Active Directory it is registered in. In this example, I have an application named "JosephGuadagno.NET Broadcasting (Test) - API" registered in my Default Directory.
Now let’s see how to get the client secret.
You should see something similar to the following:
Next, click on the Certificates & secrets menu item. After that, click on the Client Secrets tab item, then + New client secret. All highlighted in red in the image below.
In the dialog box that follows, enter the following:
Field | Description | Value |
---|---|---|
Description | A description of the client | SwaggerClient |
Expires | How long the client secret will be valid for | 6 months |
After it is done creating the client secret, you should see something similar to the following:
NOTE: Copy the client secret and store it securely. You’ll need it for the next step and once this blade closes, you cannot access the client secret again.
Locate the UseSwaggerUI method in your code and add the following code:
app.UseSwaggerUI(options =>
{
options.OAuthAppName("Swagger Client");
options.OAuthClientId("<Your client id>");
options.OAuthClientSecret("<Your client secret>");
options.OAuthUseBasicAuthenticationWithAccessCodeGrant();
});
Now, you'll need to replace the <Your client id> with the client id of the registered application. In the example above, it is labeled as Application (client) ID and starts with 027edf. The <Your client secret> is the client secret you copied earlier.
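As a side note, rather than hard-coding these values, you could pull them from configuration. A sketch, assuming hypothetical Swagger:ClientId and Swagger:ClientSecret settings in appsettings.json and the .NET 6 minimal hosting model:
app.UseSwaggerUI(options =>
{
    options.OAuthAppName("Swagger Client");
    // "Swagger:ClientId" and "Swagger:ClientSecret" are assumed configuration keys
    options.OAuthClientId(app.Configuration["Swagger:ClientId"]);
    options.OAuthClientSecret(app.Configuration["Swagger:ClientSecret"]);
    options.OAuthUseBasicAuthenticationWithAccessCodeGrant();
});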
If everything is configured correctly, once you start your API and navigate to the Swagger UI, you should see something similar to the following:
Clicking on the Authorize button will bring up a page showing you some information about the application, including the name, authorization url, client_id, and scopes being requested.
Click the Authorize button to start the sign-in process. Microsoft Identity will redirect you to the authorization URL. Once you are redirected, you will be able to sign in with your Microsoft account. You will then be presented with a page that shows you the permissions being requested and allows you to authorize this application to access those permissions.
The Swagger UI will let you know that it received the authorization.
Now the Swagger UI will look like this:
So with a little work, we can see how easy it is to use the Swagger UI to authenticate users to the API. This is a great way to test out the API and make sure that it is working as expected.
Most of the code here can be just copied and pasted into future projects. The only thing you would need to do is tweak the scopes and update the client id/secret.
This post is based on the following products:
Product | Version | Download Link |
---|---|---|
Visual Studio Code | 1.67.0 | Download |
GitHub Copilot | 1.7.4421 | Sign up |
GitHub Copilot Extension | 1.7.4421 | Install |
Note: GitHub Copilot was in Technical Preview when this post was written and is subject to change.
Now some of the suggestions that GitHub Copilot makes are a little freaky as to how good the predictions are. I’ll cover some of them in this post.
Before I show you some of them, I wanted to show you that GitHub Copilot started working on the post before I even wrote the first word. When I start writing a post, I first start out with the file name, which is typically formatted yyyy-mm-dd-title-of-post.md (which, by the way, was suggested by GitHub Copilot :smile:). The file name for this post is this-post-was-written-with-github-copilot.md. After the file is created, I start working on the header or metadata for the post. Once I started typing title: for the header of this post, GitHub Copilot suggested the title of the post.
This is what a typical header looks like:
---
title: "This post was written with GitHub Copilot"
header:
og_image: /assets/images/posts/header/github-copilot-writing.png
date: 2022-01-13 17:30:00 -0700
categories:
- Articles
tags:
- GitHub
- CoPilot
- blog
---
I recorded a video while I was writing this post so you can see, if you wish, how I use GitHub Copilot to write these posts. If you watch the video, you will see my typing errors, but more importantly, how GitHub Copilot suggests the text for this post.
Now let’s look at some of the suggestions that GitHub Copilot has made.
This blog is written using Markdown, more on this in a future post. In a previous post, I created a list of items I wanted to cover. Sample list below:
## Presentation
C# 10 language features
ASP.NET Changes
Maui, no, not the beach
Performance improvements
New APIs
Other enhancements
Once I added the list above and started typing ### to create a new heading, GitHub Copilot suggested the next item in the list.
### C# 10 language features
### ASP.NET Changes
### Maui, no, not the beach
Minute marker: 3:21 to 6:38
GitHub Copilot will suggest items from the clipboard when they make sense. I copied the URL of this blog, josephguadagno.net, to the clipboard, then created a link in the document.
Link to the blog: josephguadagno.net
That’s pretty cool.
Minute Marker: 8:28 to 8:40
After I typed the ## to indicate an H2 heading, GitHub Copilot suggested that I name it Tip 3: Markdown Preview. Since I had labeled my headers as Tip 1, Tip 2, etc. in the draft of this post, all I had to do was hit Tab and Enter.
Minute Marker: 8:55 to 9:07
GitHub Copilot will also suggest patterns that make sense. I created a pattern in the document: DistinctBy/InsertBy/RemoveBy/UpdateBy/WhereBy. After I typed DistinctBy/InsertBy/ and then typed Remove, the By/ was added. GitHub Copilot recognized the *By/ pattern and suggested the next item in the list.
Minute Marker: 12:10 to 12:35
Validate URLs: Copilot makes assumptions based on previous URLs or patterns that might not match the actual URL.
As I discover more about the features of GitHub Copilot, I will add to this post.
Visual Studio has had its first major release in about 18 months (depending on how you look at it) :smile:. This release adds a ton of new features and capabilities to the IDE. Now is a great time to start learning about them.
The user experience of the IDE has been updated to be more consistent and more user friendly. This includes new icons, new fonts, new personalization, and more.
The icons have been updated to be more consistent across the product while remaining legible and familiar to the user.
You’ve probably noticed in the image above that there are icons for a light and dark theme. While themes are not new to Visual Studio, Visual Studio now offers you the ability to sync your Visual Studio theme with your operating system theme.
The dark theme has been updated also to better align with the Microsoft design guidelines and improve accessibility.
Visual Studio now includes a Theme Converter which converts Visual Studio Code themes to Visual Studio themes.
Visual Studio now includes inlay hints for code completion, code lens, and more. Inlay hints can display parameter name hints for literals, function calls, and more.
In this image, you can see that Visual Studio tells you that the type for variable imageUrl
is string
and contact
is of type Contact
. Further down the image, the RedirectToAction method has a parameter named actionName, for which this sample is passing the Details action.
Note, this feature is not on by default. You can enable it by going to the Tools > Options > Text Editor > C# or Basic > Advanced then select Inlay Hints.
You might be saying that all these user interface changes are nice, but Visual Studio is slow enough already. Well, that might have been the case for earlier versions of Visual Studio, but that is not the case for Visual Studio 2022. It's faster in part because Visual Studio 2022 is now a 64-bit application. This means that the main process (devenv.exe) is no longer limited to 4GB of memory. Now Visual Studio can load larger projects and load more projects at once. You'll also avoid the "Out of memory" errors that Visual Studio could hit before when opening large solutions, files, or objects into memory.
Solution loading and file searching is now faster as well. Visual Studio now stores additional information about the solution in the .sln file. This information is used to speed up the loading of the solution. This information is also used to speed up the file searching.
To continue on the speeding-up-Visual-Studio theme, Microsoft also improved the Fast up to date feature to better check whether a project or its dependencies are up to date or need to be rebuilt.
Visual Studio 2022 has added and enhanced the debugging features of Visual Studio.
Let’s talk about breakpoints first. There are two new breakpoints that you can set in Visual Studio, temporary and dependent breakpoints, as shown in the image below.
The Temporary breakpoint is used to set a breakpoint that will only break once. Once Visual Studio hits that breakpoint, it deletes it. This is helpful if you want to set a breakpoint only to validate that something is working, and you aren’t debugging the code.
The Dependent breakpoint is used to set a breakpoint that will only break when another breakpoint is hit.
Previous versions of Visual Studio added a feature called “Run to Cursor”. This feature was used to execute code up to the code at the cursor.
However, if you had any breakpoints between where you were and where you wanted to run to, Visual Studio would stop at all those breakpoints. Now with Force Run To Cursor, you can run to the cursor without hitting any breakpoints. If you hold the shift key down while hovering over the Run to Cursor glyph, Visual Studio will change the glyph to a Force Run To Cursor glyph and will run to the cursor without hitting any breakpoints.
The Force Run to Cursor is also available in the Debug menu.
For more on breakpoints or debugging tips and tricks in Visual Studio, check out this video:
IntelliCode improves IntelliSense by using AI to help you find the right code completion. IntelliCode is context aware and will help you find the right code completion when you are typing a method call, a property, or a variable.
In the image below, I start to create a new method after the GetContactsAsync
method. After I type public async
, IntelliCode is inferring that I want to create a DeleteContactAsync
method with a parameter named contactId
. If that is what I want, I can hit the Tab
key twice to insert the suggestion.
Visual Studio 2022 adds multiple repository support, which includes the ability to track changes across all the repositories in a solution. If you open a solution that has multiple Git repositories in it, Visual Studio will connect to and activate those repositories. Right now, this is limited to a max of 10 repositories. You can tell whether Visual Studio has connected to or activated your different Git repositories by looking at the repository picker on the status bar (located at the lower right corner), which will tell you the number of active repositories you have.
The Git integration with Visual Studio has been improved and includes support for multiple repositories, including improvements to both the Solution Explorer and code editors.
Hot Reload is a feature of Visual Studio that allows you to modify your application's managed code while that application is running, without the need to hit a breakpoint or pause the application. This is a cool feature that will save you a lot of time: you no longer need to pause or stop your application to see how your source code changes affect it. However, support for this feature is still in progress; there are some scenarios and products that are not yet supported.
Visual Studio 2022 for Mac is coming. The Visual Studio team wants to make a modern .NET IDE tailored for the Mac that will look familiar to those using Visual Studio for Windows while using native macOS UI. For more on the Visual Studio 2022 for Mac and/or to join the private beta, please visit here.
While technically not released with Visual Studio 2022, Microsoft released .NET 6 at the same time and includes the .NET 6 SDK in the Visual Studio installation. So now is the time to start migrating your .NET 5, and earlier, projects to .NET 6. As Barry Dorrans @blowdart points out, .NET 5 moves to end of life in May of 2022.
Some more details on the support policy for .NET.
Version | Original Release Date | Latest Patch Version | Patch Release Date | Support Level | End of Support |
---|---|---|---|---|---|
.NET 6 | November 08, 2021 | 6.0.0 | November 08, 2021 | LTS | November 08, 2024 |
.NET 5 | November 10, 2020 | 5.0.12 | November 08, 2021 | Current | May 08, 2022 |
.NET Core 3.1 | December 3, 2019 | 3.1.21 | November 08, 2021 | LTS | December 3, 2022 |
Source: .NET Support Policy
So, what’s stopping you from upgrading your IDE and version of .NET?
In a previous post, I talked about all of the new features of C# 9. With the recent release of .NET 6, I wanted to share some of the new language features of C# 10.
Let’s take a look at some of the new language features.
C# 10 added quite a few features to the language that can save you a lot of time.
In my opinion, file-scoped namespaces are a great way to organize your code. They allow you to organize your code into logical groups and keep your code from being too cluttered.
File-scoped namespaces allow you to save some keystrokes and an indentation level in your code. Now, you can declare your namespace at the top of your file, assuming you only have one namespace per file, which I believe you should always do.
Old Code:
namespace MyNamespace
{
class MyClass
{
public void MyMethod()
{
// ...
}
}
}
Now becomes:
namespace MyNamespace;
class MyClass
{
public void MyMethod()
{
// ...
}
}
Now we save 2 curly braces and 1 indentation level. I kind of wish this feature was in .NET 1, since you really should only have one namespace per file :smile:.
How often do you see or type the same namespaces over and over again? using System;, for me, is declared in almost every file in my project. With C# 10's global using directives, you can declare a using once and have it apply across your whole project. Now I can add global using System; to one file in my project, and the using statement will be referenced throughout all my files/classes.
I see myself using the following code in my project regularly now:
global using System;
global using System.Collections.Generic;
global using System.Linq;
While not required, I recommend that you place all of your global using directives in a standard filename across your projects. I plan on using GlobalUsings.cs
but feel free to use whatever you want.
If putting your global using directives in a file is not your thing, you can also add them to your .csproj file. If I wanted to include the three global using directives above, I would add the following to my .csproj file:
<ItemGroup>
<Using Include="System" />
<Using Include="System.Collections.Generic" />
<Using Include="System.Linq" />
</ItemGroup>
Either approach will work, but the .csproj approach seems to be easier to discover.
If global using is not your or your team's thing, you can disable it by adding the following to your .csproj file:
<PropertyGroup>
  <!-- Can also be set to "false" -->
  <ImplicitUsings>disable</ImplicitUsings>
</PropertyGroup>
Pattern Matching was introduced in C# 7. It allows you to match the properties of an object against a pattern. Pattern matching is a great way to write cleaner code. In C# 8, the Property Patterns feature was added, which enabled you to match against properties of an object like this:
Person person = new Person {
    FirstName = "Joe",
    LastName = "Guadagno",
    Address = new Address {
        City = "Chandler",
        State = "AZ"
    }
};
// Other code
if (person is Person {Address: {State: "AZ"}})
{
    // Do something
}
Now with C# 10, you can reference nested properties of objects with dot notation. For example, you can match against the State property of a Person object's Address like this:
if (person is Person {Address.State: "AZ"})
{
// Do something
}
C# 10 also made improvements to interpolated strings: const variables can now be initialized with interpolated strings.
I had trouble finding a "real world" example of this, so here is a simple example of how it works:
const string greeting = "Hello";
const string name = "Joe";
const string message = $"{greeting}, {name}!";
The message variable will have the value Hello, Joe!.
Interpolated strings have not just been improved for const variables, but for any values that can be determined at compile time. Let's say you maintain a library, and you decide to obsolete a method named OldMethod. In the past, you would have to do something like this:
public class MyClass
{
    [Obsolete("Use NewMethod instead", true)]
    public void OldMethod() { }
    public void NewMethod() { }
}
But now, you can do this:
public class MyClass
{
[Obsolete($"Use {nameof(NewMethod)} instead", true)]
public void OldMethod() { }
public void NewMethod() { }
}
This makes it easier to update your code when you need to. Now you don't have to remember everywhere you hardcoded the name of the method you want to obsolete.
The CallerArgumentExpression attribute is a new feature of C# 10 that enables you to capture the expression that is passed into a method, which is useful for debugging purposes.
Let’s say we have a method called IsValid
that checks and validates assorted properties of a Person
object.
public static class Validation {
    public static bool IsValid(Person person)
    {
        Debug.Assert(person != null);
        Debug.Assert(!string.IsNullOrEmpty(person.FirstName));
        Debug.Assert(!string.IsNullOrEmpty(person.LastName));
        Debug.Assert(!string.IsNullOrEmpty(person.Address.City));
        Debug.Assert(person.Age > 18);
        return true;
    }
}
Now we have the following code that calls the IsValid
method:
Person person = null;
var result = Validation.IsValid(person); // Fails: person != null

person = new Person {
    FirstName = "Joe",
    LastName = "Guadagno",
    Address = new Address {
        City = "Chandler",
        State = "AZ"
    },
    Age = 17
};
result = Validation.IsValid(person); // Fails: person.Age > 18
Each call to IsValid will fail because at least one assertion fails. But which one failed? That is where CallerArgumentExpression comes into play. To fix this, we'll create a custom Assert method and add the CallerArgumentExpression attribute to its parameter:
public static void Assert(bool condition, [CallerArgumentExpression("condition")] string expression = default)
{
if (!condition)
{
Console.WriteLine($"Condition failed: {expression}");
}
}
Now if we call the IsValid method (with Debug.Assert swapped for our custom Assert) using the samples above, we'll get the following output:
Condition failed: person != null
and
Condition failed: person.Age > 18
The introduction of the CallerArgumentExpression attribute has enabled a few new helper methods in the framework. For example, there is now a ThrowIfNull method that can be used to throw an ArgumentNullException if the argument is null.
We no longer have to write this:
if (argument is null)
{
throw new ArgumentNullException(nameof(argument));
}
We can now write this:
ArgumentNullException.ThrowIfNull(argument);
The method, behind the scenes, looks like this:
public static void ThrowIfNull(
[NotNull] object? argument,
[CallerArgumentExpression("argument")] string? paramName = null)
{
if (argument is null)
{
throw new ArgumentNullException(paramName);
}
}
This is not an exhaustive list of new language features introduced in C# 10. To see what else was added to C# 10, check out What’s new in C# 10.0
I've been a software engineer for 20+ years, and as the adage goes, you can't teach an old dog new tricks. However, if there is one thing I have learned in those 20+ years, it is that I am ALWAYS learning. There are always new technologies coming out, new languages, and new products to solve complex problems. .NET 5 introduced C# 9, which had many new language features. So it was time for me to learn some new tricks, and I dove into .NET 5's C# 9 language additions.
After using these new language features, keywords, and syntax, I noticed that they started to save me keystrokes and time. Since these language additions helped me, I wanted to share them with you.
Let’s take a look at some of the new language features.
The new record keyword defines a reference type that provides some built-in functionality for representing data. You might be thinking that this sounds a lot like a class, and you would be correct. It does. However, the intent is to provide smaller and more concise types to represent immutable data. I like to think of them as types used primarily to transfer data, without a lot of methods or data manipulation.
More on C# 9 records.
There are a few different ways to define a record
. The simplest form is:
public record Person(string FirstName, string LastName);
At first glance, at least for me, that seemed weird. It has a method look and feel. There is even a semicolon at the end. But the above line creates a Person type with the init-only properties FirstName and LastName. You can access the Person as follows:
var person = new Person("Joseph", "Guadagno");
Console.WriteLine(person.FirstName); // Outputs Joseph
Console.WriteLine(person.LastName); // Outputs Guadagno
So far, this looks very class
-like. Well, it is, except for the declaration. We already saved a bunch of keystrokes. But let’s dig more into it.
Another way to define the Person record
is more class
-like:
public record Person
{
public string FirstName { get; set;}
public string LastName { get; set;}
}
You can further reduce some typing and remove some boilerplate code using the new positional syntax for records. For example, if you wanted to declare a variable with the class approach and initialize it with data, you would do something like this.
var person = new Person { FirstName = "Joseph", LastName="Guadagno"};
With positional syntax, that would look like this.
Person person = new ("Joseph", "Guadagno");
That's 26 fewer characters. Behind the scenes, the compiler is creating a lot of the boilerplate code for you. The compiler creates a constructor that matches the positions in the record declaration. Since FirstName was the first property declared when we defined the record, it assumes that Joseph should be the value of the FirstName property. The compiler also generates all the properties as init-only, more on that later, meaning the properties cannot be set after initialization, making them read-only.
One set of built-in functionality that records provide is value equality. When checking to see if two records are equal, it will look at the values of each of the properties and not the reference.
Assuming the definition of:
public record Person(string FirstName, string LastName);
When comparing records
Person person1 = new ("Joseph", "Guadagno");
Person person2 = new ("Joseph", "Guadagno");
Console.WriteLine(person1 == person2); // outputs True
Console.WriteLine(ReferenceEquals(person1, person2)); // outputs False
Since person2 has the same FirstName and LastName as person1, they are equal, although the references are not.
Using the record keyword gets you another built-in method, an improved ToString. What a deal! I really wish this were an opt-in standard for classes too.
The ToString
method outputs the following format.
<record type name> { <property name> = <value>, <property name> = <value>, ...}
For a record defined as
public record Person(string FirstName, string LastName);
and initialized as
Person person = new ("Joseph", "Guadagno");
the ToString
method would output a string like
Person { FirstName = Joseph, LastName = Guadagno }
If one of the record's properties is a reference type, the record's ToString implementation will output the type name for it.
NOTE: Don't try to use the ToString method to determine the record's properties.
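If you need the values of a positional record's properties, the compiler-generated Deconstruct method is a better fit than parsing ToString. A quick sketch:
var person = new Person("Joseph", "Guadagno");
// Positional records get a compiler-generated Deconstruct method
var (firstName, lastName) = person;
Console.WriteLine(firstName); // Outputs Joseph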
Records can be inherited the same way classes are, with a couple of restrictions: a record can only inherit from another record (not from a class), and a class cannot inherit from a record.
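Here is a quick sketch of record inheritance, using a hypothetical Employee record:
public record Person(string FirstName, string LastName);
// A record can only inherit from another record
public record Employee(string FirstName, string LastName, string Title)
    : Person(FirstName, LastName);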
Copying records is pretty easy. As an added bonus, the syntax makes the code easier to read.
Let's say I had a Person record defined as:
public record Person(string FirstName, string LastName, string HomeState);
Let's also say I want to create one Person and make multiple copies, just changing a few properties, as if I were creating variables for the whole family. In our case, the LastName and HomeState properties are the same, and using records along with the with keyword makes this easier.
var me = new Person("Joseph", "Guadagno", "Arizona");
var wife = me with {FirstName = "Deidre"};
var son = me with {FirstName = "Joseph Jr."};
var daughter = me with {FirstName = "Emily"};
Now, the wife
, son
, and daughter
objects have the property of LastName
set to Guadagno and HomeState
set to Arizona.
You can also use the new init
keyword to make certain properties settable on initialization only. The init
keyword works with properties or indexers in struct
, class
, or record
.
Let's say we want to define a Person record with FirstName, LastName, and CreatedOnDate properties. The CreatedOnDate should not be editable after the record is initialized. We would declare the record like this.
public record Person
{
public string FirstName { get; set;}
public string LastName { get; set;}
public DateTime CreatedOnDate { get; init;}
}
Notice that the CreatedOnDate property uses the keyword init instead of set. This means CreatedOnDate can only be set when the object is initialized.
var person = new Person { FirstName = "Joseph", LastName = "Guadagno", CreatedOnDate = DateTime.Now };
After declaring this record, we are limited as to what properties we can change.
person.FirstName = "Joe"; // valid
person.CreatedOnDate = DateTime.Now; // You will get a compile error
The second line will cause a compilation error because the CreatedOnDate property is init-only.
You can also declare the setter of a property with a backing field as init-only.
public class Person
{
    private readonly DateTime _dateOfBirth;
    public DateTime DateOfBirth
    {
        get => _dateOfBirth;
        init => _dateOfBirth = value == default
            ? throw new ArgumentNullException(nameof(DateOfBirth))
            : value;
    }
    public string FirstName { get; set;}
    public string LastName { get; set;}
}
Here, we define the class Person with an init-only DateOfBirth property backed by a read-only field. If the property is set to an invalid (default) value during initialization, you will get a runtime exception.
This is valid (assuming the definition above).
var person = new Person{FirstName="Joseph", LastName="Guadagno", DateOfBirth=DateTime.Now};
This is not (assuming the definition above).
var person = new Person{FirstName="Joseph", LastName="Guadagno"};
Keep in mind that the init accessor only runs when the property is assigned. The second sample compiles, but it leaves DateOfBirth at its default value; assigning a default (invalid) date during initialization is what triggers the runtime exception.
I started out this post introducing the notion that C# 9's language features help you be more productive and reduce keystrokes. Top-level statements are another one of those features. To be honest, you probably won't use this feature a lot. In fact, you can only have one file in your application that uses it. It's generally helpful for demonstrating some functionality and removing all of the extra ceremony around application startup. I see myself using it when I am creating presentations.
Let’s take the typical “Hello World” sample.
1
2
3
4
5
6
7
8
9
10
11
12
using System;
namespace CSharp9Features.ConsoleApp
{
static class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
}
}
}
It's 12 lines long using the default .NET C# console app template. Now with top-level statements, this can be reduced to:
System.Console.WriteLine("Hello World");
Now "we" reduced the code from 12 lines and 210 characters to 1 line and 40 characters. Behind the scenes, the compiler essentially creates those 12 lines for you. But again, C# 9 is trying to make things easier for you, so why type those lines when the compiler knows that is what you want?
In a more "realistic" example, let's look at an ASP.NET Core WebAPI project. The typical template would have a Program.cs file that looks something like this.
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
namespace Contacts.Api
{
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); })
.ConfigureLogging(logging =>
{
logging.ClearProviders();
logging.SetMinimumLevel(LogLevel.Trace);
});
}
}
Now with C# 9, I can remove some of the noise and ceremony and have my code just be what my API needs to start.
using Contacts.Api;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
CreateHostBuilder(args).Build().Run();
IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); })
.ConfigureLogging(logging =>
{
logging.ClearProviders();
logging.SetMinimumLevel(LogLevel.Trace);
});
This code now clearly states the intent of the Program.cs without the extra namespace or Main method.
While pattern matching is not new to C# 9, C# 9 did add a few more patterns.
Logical patterns:
and
or
not
Relational patterns:
<
>
<=
>=
These patterns help add readability to code. My favorite addition to this is the not
pattern matcher. Now I can take all the instances of
if (!(person is null))
and make them more readable with
if (person is not null)
While this one is more keystrokes, the extra couple of characters makes it more readable to me than the !
operator.
The compiler is getting smarter. It’s not necessarily getting more intelligent, but getting better at understanding what you are trying to do and, again, reducing the keystrokes. The C# 9 feature of target-typed new expressions demonstrates that the compiler is getting smarter. Now, based on the variable declaration or method signature, you can omit the type in variable declarations or usage.
Here we are declaring a variable _people
of type List<Person>
private List<Person> _people = new();
You no longer have to repeat the type when initializing _people with new List<Person>(); the compiler can infer that you want a new List<Person>.
The same goes for methods. In the sample below, the method CalculateSalary
expects a parameter of type PerformanceRating
.
public Person CalculateSalary(PerformanceRating rating)
{
// Omitted
}
If I wanted to initialize a new PerformanceRating object for the method without creating a variable, I now can:
var person = CalculateSalary(new ());
or, I can pass in a new PerformanceRating object with one or more of its properties initialized:
var person = CalculateSalary(new () { Rating = "Rock Star" });
This syntax does take some getting used to. I think in the long run, it leads to code that is easier to read. However, it might add more fuel to the var vs. typed variable declaration debate. :)
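One place target-typed new is especially handy is where var is not allowed, such as field and property declarations. A sketch, using a hypothetical Department class:
public class Department
{
    // var is not allowed here, but target-typed new still trims the repetition
    private readonly List<Person> _people = new();
    public Dictionary<string, Person> PeopleByName { get; } = new();
}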
Wow, that was a lot. C# 9 added record types, init-only setters, top-level programs, enhancements to pattern matching, and more.
I hope you take some time and play around with these new language features. Doing so will reduce your keystrokes and help your code to be readable in the long run.
While not set in stone, as of the writing of this post, .NET 6 Preview 5 is planning on adding the following to C# 10:
const interpolated strings
Sealed ToString() on record types
AsyncMethodBuilder attribute on methods
For more, check out What's new in C# 10.0
The book starts with getting an understanding of what software architecture is.
After introducing architecture, the author gets into designing and building software using Microsoft tools and services, including how you can use Azure DevOps to document requirements and how to figure out which Azure services to use to host and develop your application.
From there, you build an application from scratch, introducing architecture, design concepts, and other considerations along the way. Finally, the last few chapters cover unit testing, functional testing, and building a CI/CD pipeline.
By the end of the book, you will have been exposed to ways to build highly scalable applications using Microsoft development tools and Microsoft Azure.
Overall, the author covers a lot of topics but only scratches the surface with some of them.
On a side note, I like the Questions and Further Readings sections that this book has. The questions are an excellent way to test whether you picked up what the author wrote in the chapter, and the Further Readings drive the points home and provide the reader with additional reference materials.
Purchase Software Architecture with C# 9 and .NET 5 on Amazon.
NOTE: These tips are based on an early access preview. The images and functionality could change between releases including the final release. This was based on Windows 11 Pro, Update 21H2, Build 22000.65
By default, the Windows Taskbar, is center aligned. However, you can change that in the settings. To open the settings, press Windows Key + S
to bring up the Windows Search, and in the text box type settings
. The Settings application should look like this.
Once the Settings application is open, click on Personalization. Then expand Taskbar behaviors. NOTE, you may have to scroll for this.
Now, you can set the taskbar alignment to the left by changing the Taskbar Alignment
setting. No need to click a Save button, the saving is automatic.
The Start Menu had a few user interface changes.
You can make some changes to it. Sorry, you can't bring back Live Tiles. :disappointed: However, you can turn on and off the recently added apps, most used apps, and recently opened items settings. Open up the Settings application, choose Personalization, then Start.
If you click Folders, you can customize the Start Menu by adding the Windows special folders next to the power button. You can add the following “Special Folders” to the Start Menu.
I added Settings, File Explorer, Downloads, and the Personal folder. Now the bottom of my Start Menu looks like this.
It's late and you are about to call it a night. You look in your tray to see what time it is and notice the update icon that indicates Windows has an update to apply.
You start to think to yourself, I wonder how long this is going to take? Luckily, you no longer have to guess; Windows 11 estimates it for you. Click the power icon and you are prompted with estimates as to how long Windows should take to apply the updates and restart or shut down.
Let’s just hope it never says ‘5 seconds remaining’ :smile:
The Search user interface has been revamped also. You can click the Windows Key + S
or the magnifying glass on the Taskbar to open it.
However, if you hover over the magnifying glass you will get a smaller window.
This window has a text box to enter your search terms and the three most recent applications or searches you performed.
Snip & Sketch allows you to quickly annotate screenshots, photos, and other images with your pen, touch, or mouse and save, paste or share with other apps. It is included in Windows 11, or you can download it for Windows 10 or the Xbox One.
The keyboard shortcut of Windows Key + Shift + S
will start a snipping session with Snip & Sketch.
For more on Snip & Sketch, check out Windows 11 Forum.
The Quick Settings has been revamped. The Quick Settings can be accessed by clicking on the icons in the task tray, typically the battery, network, and sound icons are visible in the task tray.
Upon clicking the task tray you should see something like this
Now clicking on the gear icon will bring up the Settings application. Clicking on the ‘Pencil’ icon will allow you to add and remove buttons to the Quick Settings panel.
From here, you can click the pin icons, noted by the arrow, to remove an item. To add an item, click the + Add button. That brings up a menu similar to this.
NOTE: This menu can vary depending on your hardware and configuration.
Technically not new to Windows but it has been drastically improved. Pressing Windows Key + V
brings up the Clipboard History Window.
The Clipboard History window allows you to paste different types of data: emojis, GIFs, equations, the contents of the clipboard (not just the last item), and a lot more.
If there are items that you want to keep on the clipboard, press the pin icon to save them.
Every day I use Windows 11, I find something new about it. For more on Windows 11, you can check out my previous post, Windows 11 - A first look.
The HTTP Client supports GET, POST, and most HTTP verbs. It even has support for converting cURL commands.
I'll be using the API that can be found on GitHub at https://github.com/jguadagno/Contacts for the examples.
If you don’t have the plugin enabled, enable it. :smile: You can enable it by going into the Settings or Preferences in Rider and selecting Plugins
. You can also get there from Navigate then Search Everything… or CTRL+T
and type Plugins. Once the plugin is enabled, you can add an HTTP request file to your solution or as a scratch file. I tend to add them as part of the solution so anyone working on the solution can use them. Scratch files, for me, are more of a temporary file that I use for 'one off' requests.
In the Solution Explorer for Rider, in one of the projects, you can right-click and choose Add… then HTTP Request.
This will add a new, blank, editor window for you to add the HTTP calls that you wish to make.
The HTTP Request editor has a separate set of commands available to it.
The HTTP request can contain a few “arguments”
A request would look something like this.
Method Request-URI HTTP-Version
Header-field: Header-value
Request-Body
In this sample request, you will see Rider adds a play button (highlighted with the red triangle).
The request itself, line 1, is a GET request, where you specify the verb and the URI to call.
Line 2 is a log file that was generated. You can click on the file name to see the results. If you execute the request multiple times, you will see one file for each request. Rider even provides you with the ability to compare the responses. This is helpful for testing.
Line 3 is important! The ###
indicates that this ends the request.
Since a POST request generally has data that goes along with it, the HTTP Request file supports that too. Simply add the Content-Type header and the body in the lines after the verb and URL line.
POST https://localhost:5001/contacts
Content-Type: application/json
{
"FirstName": "Joe",
"LastName" : "Guadagno",
"MiddleName" : "James",
"EmailAddress" : "jguadagno@hotmail.com",
"Phones": [
{
"phonenumber": "8675309",
"extension": ""
}
]
}
If the HTTP Request file has focus in the IDE, you can click on Run all requests in file, or the ‘play’ icon next to the verb. If you are going to execute the requests multiple times, as I do when building APIs, you should create a Run Configuration for them. Just be sure that your API is up and running before you make the HTTP request against it. You can create a Compound Run Configuration to start your API project and then run the HTTP requests. Just note, at the time I wrote this post, only Run is supported and not Debug.
Environment files allow you to define variables that are specific to your environment in your project. Think of it like the appSettings.json
in .NET but all in one file.
An example http-client.env.json
file could look like this.
{
"dev": {
"urlRoot": "https://localhost:5001/"
},
"uat":{
"urlRoot": "https://uat.mydomain.com/"
},
"prod":{
"urlRoot": "https://mydomain.com/"
}
}
Each root object property becomes an 'environment' that you can select to run your HTTP requests in. In this example, you can select one of the three environments, dev, uat, or prod, when you run these requests. The variable urlRoot can now be used in all your requests and will be replaced with the value for the selected environment.
In the HTTP request file, change this
GET https://localhost:5001/contacts/37/phones
to this
GET {{urlRoot}}contacts/37/phones
and if you select uat, the HTTP request that gets run is.
GET https://uat.mydomain.com/contacts/37/phones
This allows you to control the url, and other variables without having to edit the file. Win!
Rider supports two types of environment files, regular and private.
Choosing a Regular file will create the http-client.env.json
file. This file can contain common variables such as host name, port, or query parameters, and is meant to be distributed together with your project.
Choosing Private will create the http-client.private.env.json
file. This file might include passwords, tokens, certificates, and other sensitive information. It is added to your source control system's ignore list by default. NOTE: The values of variables that are specified in the http-client.private.env.json
file override the values in the regular environment file.
There is a lot more to environment files and variables. You can read more here.
You can find more samples in the Contacts-Sample-Requests.http sample HTTP request file that I used for the Contacts sample application and API.
This was a quick introduction to using the HTTP Client in JetBrains Rider to help testing an API or web service. It’s helped me a lot. Hopefully it will be equally as helpful to you!
Windows 11 provides a calm and creative space where you can pursue your passions through a fresh experience. From a rejuvenated Start menu to new ways to connect to your favorite people, news, games, and content—Windows 11 is the place to think, express, and create in a natural way.
I was curious as to what the new Windows 11 was going to be like. So I wanted to get it installed.
At the writing of this post, Windows 11 is only available to Windows Insiders in the Dev channel.
I'm a Windows Insider, but in the Beta channel. So on Thursday, I switched to the Dev channel, and Windows 11 almost immediately started downloading after I verified that the laptop I was planning to install Windows 11 on met the minimum hardware requirements.
The download completed in under 30 minutes. Once I clicked restart, after about 10 minutes and three or four reboots I was presented with a shiny new Windows 11.
Here is what I noticed in the first few hours of using Windows 11.
NOTE: Last updated on July 5th, 2021 with the Update/Shutdown estimates.
From what I can tell, a lot of work went into creating a new, modern user interface. The font selection is different, more crisp. The icons have more color. Windows now have rounded corners instead of rectangular corners.
Now when the system has updates to apply and you choose the power icon, you get prompted with estimates as to how long Windows should take to apply the updates and restart or shut down.
Let’s just hope it never says ‘5 seconds remaining’ :smile:
The Start menu is one of the biggest and most notable changes. It’s been redone, again :smile:. There are no more Live Tiles but static icons and text.
All of the pinned applications are on the top. This is a scrolling list as indicated by the two dots on the right hand side. The lower half of the Start menu are your recommended applications and files. In the image above I have the Get Started, a few PowerPoint decks, a Word document, and an Excel spreadsheet. You can click on the All apps > button on the top to get an alphabetical list of applications installed. Clicking on the More > will take you to a list of your recent files.
The task bar is probably the first notable change you will see.
If you are a MacOS user, you will notice some similarity. The new task bar is centered on your desktop. It may be possible to move it but I haven’t checked yet. There are dashes or underlines underneath the icons to let you know which ones are open and which is the window with focus. In the image above, I have Microsoft Edge, Slack, and Windows File Explorer open. Windows File Explorer was the active window at the time of the snapshot.
There are also some new buttons added to the left of the task bar.
The first icon is the Windows Start Menu, highlighted above.
The second icon is the new Windows Search.
Not much different with this search page from previous versions.
The third button is the task view, which, if you hover over it, brings up a smaller version of what is running on your desktops.
If you click on the Task View button, or four-finger swipe upwards, you'll get a new task view. So far, I am not a fan; it currently adds an extra title bar to the windows. This might be to enable the touch experience with tablets.
The fourth button is the new Widget component. More on that later.
The next round of noticeable changes were in the task tray. The task tray is that area, typically in the lower right hand corner of the screen where the date and time are displayed.
The notification window had some big changes. This is brought up if you click on the network/sound icons.
Here you will see the first set of changes, which are more touch friendly. There is more spacing around the buttons and sliders. In addition, there is easy access to the configuration of the notifications and settings.
The Windows File Explorer received some new icons and better spacing between objects.
You might notice the Linux folder item. This appears because I have Docker on this machine along with the Windows Subsystem for Linux (WSL).
The folder icons received some updates also.
These are the new(ish) items I discovered in Windows 11.
What used to be the Clipboard Viewer (accessible via the Windows Key + V) not only views the clipboard but allows you to paste emoticons, GIFs, special characters, and more. It's almost like an extra input manager.
This is the biggest piece of net-new functionality that I've seen. Widgets seem like the first step in supporting running Android applications and mimicking some of their functionality.
In this very long screenshot, you will see there are a lot of Widgets on by default; I will be disabling most of them. :smile:
You can add/remove widgets by clicking the Add Widget button. Here are the options that were presented at the time of writing this post.
I haven't tried it yet, so maybe a future update to this post will have something. According to the Windows 11 site:
With Windows 11, we’re excited to introduce Chat from Microsoft Teams integrated in the taskbar. Now you can instantly connect through text, chat, voice or video with all of your personal contacts, anywhere, no matter the platform or device they’re on, across Windows, Android or iOS. If the person you’re connecting to on the other end hasn’t downloaded the Teams app, you can still connect with them via two-way SMS.
While this feature is not yet available, the plan is to allow you to discover Android applications through the Windows Store and download them through the Amazon Appstore. And yes, they will run on the Windows PC. This is probably why we see more of the Linux/WSL integration.
I’ll update this post as I experiment more with Windows 11 over the next few days.
Please remember that this is an early access version, some of the features maybe removed or changed.
More on Windows 11
For this post, I will assume that we are working on an ASP.NET MVC (.NET Framework) that is a single project solution, which means that the data access, business logic, models, etc., are all in one solution. Similar to this.
This single project is an ASP.NET MVC application written with .NET Framework 4.5.2-4.8. There is a SQL Server data dependency where the database is present in the App_Data folder. The data access is handled through EntityFramework. You can find a completed project repository as well as the database setup instructions on GitHub.
Microsoft has made it easy to build a self-contained application and combine the user interface with the database and any business logic you need. However, with the application tightly coupled, this style makes it challenging to migrate or upgrade or even test your application. Our approach will be to break up the application into different layers or responsibilities, like the user interface, data layer/repository, and business/service layer.
While there is the .NET Upgrade Assistant to help you, it's still in preview and only does some of the legwork for you. Dave Brock put together a nice post on working with it. I'll walk you through some of the steps to redesign your application to make it a bit easier for this update, and for any future updates (hopefully, those never happen :smile:).
Putting your domain or data transfer objects into a separate project is the first step in the migration. Having your domain objects, like Customer, Order, etc., in a separate library allows you to start breaking your application into layers. This domain layer with all of the models that describe your data/objects will be used throughout the new solution to communicate data between the layers.
I'm assuming you are using Entity Framework to access your database, along with the code-based model development and not the EDMX-based approach.
If your application uses the EDMX-based approach, follow the Porting an EF6 EDMX-Based Model to EF Core guide to update to the code-based model approach; going forward, Entity Framework Core does not support EDMX-based models.
The first thing you’ll want to do is create a new class library targeting .NET Standard. Why .NET Standard and not just .NET? Having the shared libraries like the Domain or Data libraries in .NET Standard allows you greater portability between projects and platforms. This approach will also allow you to slowly migrate pieces of the main project while keeping it up.
Now move those model classes over to the new project. I would name it something like Contacts.Domain
. I typically put all of the models in a Models
folder.
You’ll want to add a reference to the new Contacts.Domain
library to the existing Contacts application. Don’t forget to update the using statements!
Note: When moving classes/files in between folders, namespaces, or projects, use the Move Instance Method refactoring (Visual Studio or JetBrains Rider/Resharper)
Now let’s work on getting data access methods out of the user interface (web app). First, we’ll want to create a new class library targeting .NET Standard and add a reference to EntityFrameworkCore. The next part can be challenging, depending on how you have your application set up.
I am assuming that most of the data access for your application in the controller methods looks like this.
public ActionResult Index() {
    var _db = new Contacts.ContactsContext();
    var contacts = _db.Contacts.ToList();
    return View(contacts);
}
or
public ActionResult Index() {
    using (var _db = new Contacts.ContactsContext())
    {
        var contacts = _db.Contacts.ToList();
        return View(contacts);
    }
}
Now how you build up the data layer is up to you. I typically follow the manager or repository pattern. There are a lot of design patterns that you can follow. The choice is yours and not the intent of this blog post. The goal is to have one or more classes responsible for handling the saving, updating, deleting, and querying the data for the user interface.
Create the EntityFramework database context
namespace Contacts.Data
{
public class ContactContext : DbContext
{
private readonly IConfiguration _configuration;
public ContactContext(IConfiguration configuration)
{
_configuration = configuration;
}
public DbSet<Contact> Contacts { get; set; }
public DbSet<Address> Addresses { get; set; }
public DbSet<Phone> Phones { get; set; }
public DbSet<AddressType> AddressTypes { get; set; }
public DbSet<PhoneType> PhoneTypes { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder options)
=> options.UseSqlServer(_configuration.GetConnectionString("ContactsDatabaseSqlServer"));
}
}
Here is a sample of what the ContactRepository
class could look like.
namespace Contacts.Data
{
    public class ContactRepository
    {
        private readonly ContactContext _db;

        public ContactRepository(ContactContext context)
        {
            _db = context;
        }

        public Domain.Contact GetContact(int contactId)
        {
            return _db.Contacts.FirstOrDefault(c => c.ContactId == contactId);
        }

        /// rest of the class removed for brevity
    }
}
Once you have moved all of the data access from the previous user interface to the new data project, you should be able to replace your database calls with calls to the repository, like contactRepository.GetContact(contactId) in the above sample.
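To illustrate, the Index action from earlier might end up looking something like this (GetContacts is a hypothetical "get all" method on the repository; only GetContact was shown above):
public class ContactsController : Controller
{
    private readonly ContactRepository _contactRepository;

    // The repository is supplied to the controller instead of newing up a DbContext
    public ContactsController(ContactRepository contactRepository)
    {
        _contactRepository = contactRepository;
    }

    public ActionResult Index()
    {
        // GetContacts is a hypothetical method that returns all contacts
        var contacts = _contactRepository.GetContacts();
        return View(contacts);
    }
}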
This approach may seem a bit risky or scary since you keep replacing portions of your application. I’d be lying if I said it wasn’t risky and scary. The truth is, it is risky and scary. However, you can mitigate some of the risks and make it easier to make changes in the future. Have I piqued your interest yet? That is where unit tests come in. But before we can build our unit tests, we will need to do some work on our solution to enable the mocking of our data repository classes. No, not mock them, but mock them :smile:. Mocking complements unit testing frameworks by isolating dependencies through creating replacement objects. In our example, we will be mocking or “faking” our database calls.
To mock our repository, we will need to create an interface for the repository so most mocking frameworks can build the objects for it.
Note: If you are using a commercial testing/mocking framework like Telerik JustMock, you do not need to create the interface. It just works. They even have support for mocking EntityFramework classes.
Creating the interface for the newly created Data library can be done in two ways: manually or automatically. I recommend the automatic way, which involves selecting the class name, right-clicking, and choosing 'Refactor' | 'Extract Interface'. Be sure to put the interfaces in the same class library as the models.
The interface will look something like this.
namespace Contacts.Domain.Interfaces
{
public interface IContactRepository
{
Contact GetContact(int contactId);
/// other methods removed for brevity
}
}
I do not intend this section to be a thorough walk-through of unit tests. I will not cover every possible scenario that you should or should not cover. The number of unit tests and their complexity is more of an art than a science. When building unit tests, I try to cover the happy path, the exception path, and the unhappy path. Does it work like it's supposed to? Do I handle known and common exceptions? Do I handle common bad data entry? But again, your mileage may vary.
Here is a sample of the GetContact
unit tests
[Fact]
public void GetContact_WithAnInvalidId_ShouldReturnNull()
{
// Arrange
var mockContactRepository = new Mock<IContactRepository>();
mockContactRepository.Setup(contactRepository =>
contactRepository.GetContact(It.IsInRange(int.MinValue, 0, Range.Inclusive))
).Returns((Contact)null);
var contactManager = new ContactManager(mockContactRepository.Object);
// Act
var contact = contactManager.GetContact(-1); // Any number less than zero
// Assert
Assert.Null(contact);
}
[Fact]
public void GetContact_WithAValidId_ShouldReturnContact()
{
// Arrange
var mockContactRepository = new Mock<IContactRepository>();
mockContactRepository.Setup(contactRepository =>
contactRepository.GetContact(It.IsInRange(1, int.MaxValue, Range.Inclusive))
).Returns((int contactId) => new Contact
{
ContactId = contactId
});
var contactManager = new ContactManager(mockContactRepository.Object);
const int requestedContactId = 1;
// Act
// Assumes that a contact record exists with the ContactId of 1
var contact = contactManager.GetContact(requestedContactId);
// Assert
Assert.NotNull(contact);
Assert.Equal(requestedContactId, contact.ContactId);
}
Yes, I said it, create a new Web Application. However, it's not going to be as hard as it may seem. We will create the new project using the template so that most of the new "plumbing code" gets created for us. I'll walk through the parts that are different. Since we are assuming your application was written using ASP.NET MVC, be sure to create a new Project and choose ASP.NET Core Web Application along with the "Model View Controller" type.
Tip: while you are creating a new Web Application, you can use the application templates that are part of the Telerik UI for ASP.NET Core suite of components and controls to make your development a lot easier and faster.
Let’s look at the folder structure and new files.
The first couple of folders for this sample are the same: Dependencies, Properties, Models, Services, and Views. I’ve copied the models, views, and services from my previous project. You’ll notice that one folder is missing: Content. That’s because the files in Content, mostly the static files, have been moved to the new wwwroot folder. Here you’ll find folders for css, js, lib, and the favicon.ico. The idea is that stuff that doesn’t change and is not part of the ASP.NET-generated pages or logic gets placed in the wwwroot folder. The content in the wwwroot folder is served up with respect to the root of the application. So if my application was https://www.josephguadagno.net, anything in the wwwroot folder would be served from https://www.josephguadagno.net. The favicon.ico would be served at https://www.josephguadagno.net/favicon.ico. So you can move your images into this folder; just remember, if you move your images, to create some rewriting rules or mirror the path you originally had them in.
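If you would rather not mirror the old paths, the URL rewrite middleware in ASP.NET Core can map the old image URLs onto their new wwwroot locations. A minimal sketch, placed in Startup.Configure (covered below) before UseStaticFiles; the regex and paths are illustrative, not from the original project:

using Microsoft.AspNetCore.Rewrite;

// Requests for the old /Content/images/* paths now resolve to /images/*
var rewriteOptions = new RewriteOptions()
    .AddRewrite(@"^Content/images/(.*)", "images/$1", skipRemainingRules: true);
app.UseRewriter(rewriteOptions);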
Some files are gone, and some files are new. Missing are the web.*.config, packages.config, and global.asax files. The web.*.config was replaced by appsettings.json, more on that later. The packages.config was moved to “inside” the csproj file. The global.asax was mostly replaced by the Startup.cs file. There are some new files also: appsettings.*.json, Program.cs, and Startup.cs.
Goodbye web.config! It was fun, but you were messy and hard to deal with at times. Hello appsettings.json. The appsettings.json is the application configuration model for .NET and ASP.NET Core.
A “typical” starter application configuration would look something like this.
{
"ConnectionStrings": {
"ContactsDatabaseSqlServer": ""
},
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
},
"AllowedHosts": "*"
}
Here we are defining the connection string ContactsDatabaseSqlServer in the ConnectionStrings object and defining the logging for the application.
You’ll notice that, by default, there is an appsettings.json and an appsettings.Development.json. ASP.NET Core supports configuration by environment, so there is no longer a need to deal with web.config transformations. In the appsettings.Development.json file, just add whatever settings you want to override for development. In this sample, I would want to update my database connection in development. The appsettings.Development.json would look like this.
{
"ConnectionStrings": {
"ContactsDatabaseSqlServer": ""
}
}
For more on configuration in ASP.NET Core, see the documentation page.
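One pattern worth knowing from those docs is binding a configuration section to a class (the options pattern). A small sketch, assuming a hypothetical Settings section in appsettings.json:

// A hypothetical class matching a "Settings" section in appsettings.json
public class Settings
{
    public string ApiRootUri { get; set; }
}

// In Startup.ConfigureServices
services.Configure<Settings>(Configuration.GetSection("Settings"));
// Consumers then take IOptions<Settings> as a constructor dependency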
Program.cs functions just like it does for a console application: it serves as the entry point for your application. For the most part, you are starting up the web host.
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
namespace Contacts.WebUi
{
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>();
}
}
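Depending on which version of the SDK and template you use, you may see the generic host instead of WebHost; the shape is slightly different, but the job, building and running the host, is the same:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());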
The Startup.cs is where you configure your site. The methods in the Startup class inform the hosting engine what services you are using. ASP.NET Core has an opt-in model, meaning you tell it what you want. In previous versions of ASP.NET, the framework gave you everything. There are two methods in the Startup class: Configure and ConfigureServices.
The Configure method is used to configure the HTTP pipeline. A sample method looks like this.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseExceptionHandler("/Home/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
});
}
In this definition, you’ll see that we check which environment we are running in and display the appropriate error page (lines 3-12). We then opt in to redirecting all requests to HTTPS (line 13), allow the host to serve static files (line 14), use the default routing (line 15), use authorization (line 16), and finally use endpoints for MVC (lines 17-21). As you can see, we explicitly tell ASP.NET Core and the host how it should work instead of it making assumptions.
ConfigureServices is used to let ASP.NET Core know what services you are planning on using. The minimum for an ASP.NET Core MVC application would be services.AddControllersWithViews(). You could also register your application dependencies, logging, database context, and more.
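A sketch of what that might look like for this sample; the repository registration is illustrative and assumes a ContactRepository implementation exists in the Data project:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Register the EF Core context (UseSqlServer requires Microsoft.EntityFrameworkCore)
    services.AddDbContext<Data.ContactsContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("ContactsDatabaseSqlServer")));

    // Register application dependencies (hypothetical implementation type)
    services.AddScoped<IContactRepository, Data.ContactRepository>();
}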
Here are a couple of things that stumped me once or twice migrating from ASP.NET Framework to ASP.NET Core. Hopefully, you don’t run into them, but if you do, try this!
The System.ComponentModel.DataAnnotations namespace is crucial in Entity Framework. This namespace used to be in the assembly/package System.ComponentModel. At one point in the evolution of the .NET Framework, at least by version 4.7.2, System.ComponentModel.DataAnnotations was moved into its own assembly/package. This change will only affect you if you migrate to ASP.NET Core MVC by keeping the .NET Framework ASP.NET MVC site going and working with .NET Standard, as I spoke to earlier.
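If you hit compile errors around data annotations in that scenario, the fix is usually adding the standalone package to the .NET Standard project; a csproj reference along these lines (the version number is illustrative):

<PackageReference Include="System.ComponentModel.Annotations" Version="4.7.0" />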
Some web.config files have the targetFramework set in them in addition to the csproj file. Look for the system.web node in the configuration section of the web.config and ensure the compilation and httpRuntime nodes have the same targetFramework as your csproj.
<system.web>
<compilation debug="true" targetFramework="4.7.2" />
<httpRuntime targetFramework="4.7.2" />
</system.web>
<TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
This gotcha is applicable if you will use .NET Standard to help with the migration and continue to work with the .NET Framework MVC application.
The first time you run an ASP.NET MVC Framework application with a reference to a library written in .NET Standard, like the Contacts.Model project, you may see one or more errors. In Chrome or Microsoft Edge, you may get an “Unlimited” or “Too Many” redirects error message. This error will happen if you have custom errors enabled in your application.
<customErrors mode="On" defaultRedirect="ErrorPage.aspx?handler=customErrors">
<error statusCode="404" redirect="ErrorPage.aspx?handler=customErrors" />
</customErrors>
Turn the custom errors off by changing the mode attribute to Off. If you refresh the browser, you will see a message saying, “System.Object is not found”. It’s a weird message because System.Object is part of both ASP.NET Core and ASP.NET. However, the error results from referencing a .NET Standard project without having a reference to .NET Standard in the .NET Framework application. After you add the reference, rerun the solution. It will still fail. Another weird one: the reason for this failure is that IIS does not know how to load that assembly. So let’s tell it how to load it. Look for the compilation\assemblies node in your web.config and add the assembly.
<add assembly="netstandard, Version=2.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51"/>
The node may look like this after adding the assembly.
<compilation debug="true" targetFramework="4.7.2">
<assemblies>
<add assembly="netstandard, Version=2.0.0.0, Culture=neutral,PublicKeyToken=cc7b13ffcd2ddd51"/>
</assemblies>
</compilation>
Note: You may have other assemblies in this node depending on your application.
Now you should be able to execute and view the application. Remember to turn your custom errors back on.
ASP.NET Core does not have the concept of the App_Data folder used in earlier versions of ASP.NET. App_Data was commonly used for the identity database or dynamic app configuration. While you probably shouldn’t store databases or database files on the web server, it’s a common practice on development machines to keep application-specific databases in the App_Data folder. Although ASP.NET Core does not support this out of the box, you can do it with a bit of code.
The code for this workaround should go in the Startup.cs class.
First, you create a token or string in the appsettings.Development.json file that we will replace with the folder the application is running in. Here, you’ll see I added the %CONTENTROOTPATH% token as part of the AttachDbFilename property. Note: the name of the token can be anything you want.
{
"ConnectionStrings": {
"ContactsSqlServer": "Data Source=(LocalDB)\\MSSQLLocalDB;
AttachDbFilename=%CONTENTROOTPATH%\\App_Data\\contacts.mdf;
Integrated Security=True"
}
}
Next, in the Startup.cs file, you need to create a variable to hold the content root path.
private string _contentRootPath = "";
Next, you’ll need to update the constructor of the Startup
class to have ASP.NET Core inject the configuration and web host environment.
private string _contentRootPath = "";
public Startup(IConfiguration configuration, IWebHostEnvironment env)
{
Configuration = configuration;
_contentRootPath = env.ContentRootPath;
}
Then, in ConfigureServices, before you need to use the App_Data folder, replace the token with the content root path.
string connectionString = Configuration.GetConnectionString("ContactsSqlServer");
if (connectionString.Contains("%CONTENTROOTPATH%"))
{
connectionString = connectionString.Replace("%CONTENTROOTPATH%", _contentRootPath);
}
When you add the DbContext in ConfigureServices, replace the code with:
services.AddDbContext<Data.ContactsContext>(options => { options.UseSqlServer(connectionString);});
Now copy the App_Data folder from your previous project to the new one.
If you used identity management in ASP.NET MVC Framework, you need to update a couple of things, primarily if you used Entity Framework to assist.
In ASP.NET MVC, authentication and identity features are configured using ASP.NET Identity in Startup.Auth.cs and IdentityConfig.cs, located in the App_Start folder. In ASP.NET Core MVC, these features are configured in Startup.cs.
Install the identity NuGet packages; for an Entity Framework-backed store, that includes Microsoft.AspNetCore.Identity.EntityFrameworkCore.
Then you’ll need to configure identity in the Startup.ConfigureServices method of Startup.cs, something like this.
public void ConfigureServices(IServiceCollection services)
{
// Add EF services to the services container.
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
services.AddIdentity<ApplicationUser, IdentityRole>()
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();
services.AddMvc();
}
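One related gotcha worth calling out: with ASP.NET Core Identity, the Configure method also needs the authentication middleware in the pipeline, and ordering matters; it has to come after routing and before authorization:

app.UseRouting();
app.UseAuthentication(); // must come before UseAuthorization
app.UseAuthorization();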
You can read more on it at Migrate Authentication and Identity to ASP.NET Core or ASP.NET Core Identity 3.0 : Modifying the Identity Database
That’s it! That’s a lot to take in. While I can’t cover every possible scenario that you might hit, hopefully, you have enough to get you started and handle some of the surprises that I ran into while migrating applications.
]]>I like how this book breaks down a “real world” application into pieces (chapters). The author starts by introducing the languages, technologies, and tools. After the introduction, you begin to build an application. The author presents how to use ASP.NET Core as the “back-end,” then transitions to using Angular for the “front-end.” From there on out, the application is built capability by capability. The author covers building out the database, including the server and building out the API to retrieve data from the database, and then consuming that data via the Angular client. The author then discusses editing, updating, paging, sorting, and filtering the data—Lots of capabilities that we find in everyday applications.
Over the following few chapters, the author makes improvements to the code base to make it more maintainable and testable. Then the author talks about authentication and authorization. The book is wrapped up by showing how to deploy your application to Windows, Linux, or Azure App Service.
Overall, it is an excellent introduction to ASP.NET Core 5 and Angular. As a bonus, in every chapter, the author has a recommended reading section and a reference section for more details.
This book would be a great addition for any web developer who wants to get into ASP.NET and Angular.
Get ASP.NET Core 5 and Angular on Amazon.
]]>In this post, I’ll walk you through how you can implement the same functionality using HTML, JavaScript, Azure Map Search service, and Telerik Kendo UI Autocomplete control. You’ll be able to download the completed code at the end of the post.
If you want to watch me do this “live” instead, check out the video.
This post was written with the software and versions listed below.
The post assumes you already have an Azure Maps account with a corresponding Primary Key and/or Client Id. If you don’t have a key, you can obtain one here. In addition to an Azure Maps account, you need to have a licensed copy of the Telerik Kendo UI suite.
If you are ready, open up your IDE of choice, Visual Studio, Visual Studio Code, JetBrains Rider, or just plain Notepad/TextEdit, to get started.
Start with the HTML file. Create a file and name it autocomplete.html. Create the standard <HTML>, <HEAD>, and <BODY> tags.
In the <HEAD>
section, we are going to need to register the stylesheets and Javascript files for both the Kendo UI libraries and the Azure Maps.
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2021.1.224/styles/kendo.common.min.css" />
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2021.1.224/styles/kendo.office365.min.css" />
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2021.1.224/styles/kendo.office365.mobile.min.css" />
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
<script src="https://kendo.cdn.telerik.com/2021.1.224/js/jquery.min.js"></script>
<script src="https://kendo.cdn.telerik.com/2021.1.224/js/kendo.all.min.js"></script>
<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
NOTE: The version number for the Kendo UI library at the time of this post was 2021.1.224; you may have a different version, but that should be fine.
You’ll then need to add the references to our script files.
<script src="azurekey.js"></script>
<script src="autocomplete.js"></script>
Now in the <BODY>
tag, add our components, a <input>
and a <div>
<input type="text" id="queryText" style="width: 100%;" />
<p></p>
<div id="mapControl" style="position: relative;width:100%;min-width:290px;height:500px;"></div>
Feel free to style this however you want; I did it this way to help accent stuff in the UI. The critical part of the HTML is having an <input> with the id of queryText and a div with the id of mapControl.
The completed HTML file should look something like this.
<html>
<head>
<title>Autocomplete Demo</title>
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2021.1.224/styles/kendo.common.min.css" />
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2021.1.224/styles/kendo.office365.min.css" />
<link rel="stylesheet" href="https://kendo.cdn.telerik.com/2021.1.224/styles/kendo.office365.mobile.min.css" />
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
<script src="https://kendo.cdn.telerik.com/2021.1.224/js/jquery.min.js"></script>
<script src="https://kendo.cdn.telerik.com/2021.1.224/js/kendo.all.min.js"></script>
<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
<script src="azurekey.js"></script>
<script src="autocomplete.js"></script>
</head>
<body>
<input type="text" id="queryText" style="width: 100%;" />
<p></p>
<div id="mapControl" style="position: relative;width:100%;min-width:290px;height:500px;"></div>
</body>
</html>
Now let’s focus on the JavaScript files. You’ll need to create two files, azurekey.js and autocomplete.js. You don’t need to create azurekey.js if you don’t want to, but since this file has a key, I exclude it from source control.
This file has one line in it, which sets up a variable named azureKey used throughout autocomplete.js for our Azure Maps integration. The contents of the file should be:
const azureKey = "replace me";
Replace the replace me with your Azure Maps Client Id or Primary Key.
We are going to have a few variables and two functions in the file. Let’s start with some of the variables.
let map;
let azureSearchDataSource;
let azureMapDataSource;
const mapCenter = [-73.985130, 40.758896]
const defaultZoom = 15;
Variable Name | Type | Purpose
---|---|---
map | Azure Maps Map | Displays the map control
azureSearchDataSource | Kendo UI DataSource | Used to call the Azure Maps Search service from the Kendo UI AutoComplete control
azureMapDataSource | Azure Maps DataSource | Used to draw the pushpins on the layers of the map
mapCenter | Array of numbers | Used to center the map and provide hints of where to search. The first number is the longitude and the second is the latitude. In this example, -73.985130, 40.758896 is Times Square, Manhattan, NY
defaultZoom | A number | Used to tell the map control at what level to zoom in
Once the variables are there, we create a function, initializeMap, which looks like this.
function initializeMap() {
map = new atlas.Map('mapControl', {
center: mapCenter,
zoom: defaultZoom,
authOptions: {
authType: "subscriptionKey",
subscriptionKey: azureKey
}
});
map.events.add('ready', function () {
azureMapDataSource = new atlas.source.DataSource();
map.sources.add(azureMapDataSource);
map.layers.add(new atlas.layer.SymbolLayer(azureMapDataSource));
azureMapDataSource.add(new atlas.data.Point(mapCenter));
})
}
In lines 1-9, we initialize the map control.
Line 2 is the name of the div you want the map to be rendered in. You’ll notice that this name matches the div we created on the HTML page.
Lines 3 and 4 use the variables we created in the previous step to center the map and set the zoom level. In lines 5-7, we configure the map authentication. For more details on the map’s customization, check out the Azure Maps documentation on creating a map.
In lines 10-15, we attach a ready event to the map control, which instructs Azure Maps to execute the code when the map is ready, meaning displayed. In lines 11-14, we add a data source to the map, which has a symbol layer in it. This is done so we can draw a pushpin at the center of the map.
We’re almost there!
The next step is to initialize the map on the page when the document is ready. To do that, let’s attach to the ready event with jQuery.
$(() => {
initializeMap();
});
At this point, if you save the three files and open autocomplete.html in a browser, you should see something like this.
Before we assemble the components to enable the address and point of interest suggestions, let’s take a quick look at the Azure Maps Search service API.
The calls to the API are done via an HTTP GET with query parameters. The breakdown of the parameters is as follows.
Name | Value | Comments
---|---|---
Domain | https://atlas.microsoft.com |
Endpoint | /search/poi/ | The search for Points of Interest endpoint
Return Type | json | Desired format of the response. Value can be either json or xml.

Parameter Name | Example Value | Comments
---|---|---
typeahead | true | Boolean. If the typeahead flag is set, the query will be interpreted as a partial input and the search will enter predictive mode
api-version | 1 | Version number of the Azure Maps API. Current version is 1.0
view | Auto | The View parameter specifies which set of geopolitically disputed content is returned via Azure Maps services, including borders and labels displayed on the map.
language | en-US | Language in which search results should be returned.
countrySet | US | Comma-separated string of country codes
subscription-key | your subscription key |
lon | -73.98513 | The longitude of the center of the search area
lat | 40.758896 | The latitude of the center of the search area
query | macy | The address/place you are searching for
Sample search query
https://atlas.microsoft.com/search/poi/json?typeahead=true&api-version=1&view=Auto&language=en-US&countrySet=US&subscription-key=replace-me&lon=-73.985130&lat=40.758896&query=macy
For more details on the search parameters available, please look at the Get Search POI documentation.
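The Kendo UI DataSource will issue this request from the browser, but nothing stops you from exercising the same endpoint from .NET first to inspect the response. A quick HttpClient sketch using the same query parameters as above (the subscription key is a placeholder):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> SearchPoiAsync(string query, double lon, double lat, string subscriptionKey)
{
    using var client = new HttpClient();
    var url = "https://atlas.microsoft.com/search/poi/json" +
              "?typeahead=true&api-version=1&view=Auto&language=en-US&countrySet=US" +
              $"&subscription-key={subscriptionKey}&lon={lon}&lat={lat}" +
              $"&query={Uri.EscapeDataString(query)}";
    // Returns the raw JSON, shaped like the sample response below
    return await client.GetStringAsync(url);
}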
Assuming the request is correct, the service responds with JSON data. The response has two properties: summary and results. The summary section is just that, a summary of the request and the results, defined as a SearchPoiSummary. The results property is an array of SearchPoiResult items.
{
"summary": {
"query": "macy",
"queryType": "NON_NEAR",
"queryTime": 51,
"numResults": 10,
"offset": 0,
"totalResults": 960,
"fuzzyLevel": 1,
"geoBias": {
"lat": 40.758896,
"lon": -73.98513
}
},
"results": [
{
"type": "POI",
"id": "g6JpZK84NDAzNjkwMTA3NzI0NjOhY6NVU0GhdqdVbmlmaWVk",
"score": 2.5242369175,
"dist": 973.6298458107448,
"info": "search:ta:840369010772463-US",
"poi": {
"name": "Macy's Department Store",
"phone": "+1 212-695-4400",
"categorySet": [
{
"id": 7376
}
],
"url": "www.macys.com",
"categories": [
"important tourist attraction"
],
"classifications": [
{
"code": "IMPORTANT_TOURIST_ATTRACTION",
"names": [
{
"nameLocale": "en-US",
"name": "important tourist attraction"
}
]
}
]
},
"address": {
"streetNumber": "151",
"streetName": "W 34Th St",
"municipalitySubdivision": "Manhattan",
"municipality": "New York",
"countrySecondarySubdivision": "New York",
"countrySubdivision": "NY",
"countrySubdivisionName": "New York",
"postalCode": "10001",
"extendedPostalCode": "10001-2101",
"countryCode": "US",
"country": "United States",
"countryCodeISO3": "USA",
"freeformAddress": "151 W 34Th St, New York, NY 10001",
"localName": "New York"
},
"position": {
"lat": 40.75042,
"lon": -73.98803
},
"viewport": {
"topLeftPoint": {
"lat": 40.75231,
"lon": -73.99052
},
"btmRightPoint": {
"lat": 40.74853,
"lon": -73.98554
}
},
"entryPoints": [
{
"type": "main",
"position": {
"lat": 40.75092,
"lon": -73.99043
}
},
{
"type": "main",
"position": {
"lat": 40.75046,
"lon": -73.98934
}
}
]
}
]
}
While there is a lot of data that we can use, we will only use the address.freeformAddress, poi, poi.name, and position properties.
Now let’s turn the queryText input control into a Kendo UI AutoComplete widget. Let’s go back to the autocomplete.js file; before the initializeMap(); statement and after the $(() => { statement, place the following code.
$('#queryText').kendoAutoComplete({
minLength: 3,
placeholder: "Select a venue",
dataValueField: "id",
dataTextField: "poi.name"
});
This turns the input control into an AutoComplete widget.
Line 2 tells the widget to start the lookup only when there is a minimum of 3 characters entered.
Line 3 is the placeholder text that gets displayed when there is no input, prompting the user for what to search.
Line 4 is the field to use as the value of the selected item; the value is helpful for lookups and storage later.
Line 5 is the text field to display in the control once an item is selected.
For additional configuration items for the Autocomplete widget, check out the documentation.
Now that the widget is created, we need to create a data source to connect the AutoComplete widget with the Azure Maps Search service. This is where the shared DataSource utility comes in. I’m not going to go into great detail on the utility because its documentation is great.
Let’s go and create the DataSource. Just before the $('#queryText').kendoAutoComplete({ code, we are going to create and configure the DataSource utility.
azureSearchDataSource = new kendo.data.DataSource({});
This initializes the DataSource utility. Now, let’s configure it.
In between the curly braces {}, we are going to add the serverFiltering property and set it to true. This is very important. Setting serverFiltering to true instructs the DataSource that it needs to get the data from the server by making another call any time the search input changes. Otherwise, the control will filter against a locally cached version of the data set. In this case, we don’t want to use a locally cached copy because the data will most likely not match.
Next, we have to set the transport property. The transport is used to interact with the data source, in our case, the Azure Maps Search service. We want to configure the read property of the transport object. Since the DataSource utility supports CRUD operations, the transport object supports read, write, update, and delete.
Our transport section looks like this.
transport: {
read: {
url: "https://atlas.microsoft.com/search/poi/json?typeahead=true&api-version=1&view=Auto&language=en-US&countrySet=US&subscription-key=" + azureKey,
type: "get",
dataType: "json",
data: function() {
var center = map.getCamera().center;
var searchTerm = $("#queryText").data("kendoAutoComplete").value();
return {
lon: center[0],
lat: center[1],
query: searchTerm
}
}
}
}
The url property sets the URL for the read operation. In our case, we populated most of the fields; any dynamic fields are handled with the data property. The type and dataType properties do not need to be set, as they are the defaults; they are set here for clarity. As I alluded to, the data property returns a JSON object, which the DataSource utility stringifies and appends to the URL. In the code above, on line 7, we create a variable called center, which asks the Azure Maps map control for the center of the map in case the user moved it. On line 8, we extract the value of the queryText control to get the search term. On lines 9-12, we return an object that looks like this.
{
"lon": -73.98516,
"lat": 40.758896,
"query": "macy"
}
This then gets stringified to &lon=-73.98516&lat=40.758896&query=macy.
Since the Azure Maps Search service does not return a bare array of results, we have to tell the DataSource what the schema of the data is. Let’s add a schema property to the DataSource that looks like this.
schema: {
type: "json",
data: function(response) {
return response.results;
},
model: {
id: "id"
}
}
Line 2 tells the DataSource to expect json data. For line 3, we create a function for the data property. This property identifies what the DataSource should consider the result set. Since the Azure Maps Search service returns it as a property of the response, we return response.results from the function. In lines 6-8, we define the model. The model is essential if you are doing CRUD operations, where the structure of the data matters for editors. For our use case, we just need to map the model’s id to the id of the result item.
Now, you can go back to the initialization of the AutoComplete widget and set the dataSource property to azureSearchDataSource.
Now that we know the data structure returned, let’s display some additional data in the search suggestion. For this example, I would like to display the name of the suggestion, poi.name, and the address, address.freeformAddress. Fortunately, the AutoComplete widget, and many other widgets in the suite, provide a templating engine to help change the look. We are going to use the template property.
Explaining everything the templating engine can do is well beyond this blog post, so I’ll cover just what we use. Somewhere in the body of the autocomplete.html page, create a template like this.
<script id="autoCompleteItemTemplate" type="x-kendo-template">
# var suggestionLabel = address.freeformAddress; #
# if (poi && poi.name) { #
# suggestionLabel = poi.name + ' - (' + suggestionLabel + ')'; #
# } #
<span>#:suggestionLabel#</span>
</script>
In line 1, we set the id of the script block to autoCompleteItemTemplate. This allows us to reference it from the AutoComplete control. We also set the type of the script to x-kendo-template, which helps with IntelliSense.
In lines 2-6, we run some logic to create the display text. Line 2 creates a variable named suggestionLabel and sets it to the address.freeformAddress property. IntelliSense does not help a lot here, so watch your spelling. On line 3, we check to see if the poi and poi.name properties are present; if they are, on line 4, we set the suggestionLabel variable equal to poi.name + ' - (' + suggestionLabel + ')'.
Line 6 writes out a span element with the value of the suggestionLabel variable.
Based on the sample results above, we would get
<span>Macy's Department Store - (151 W 34Th St, New York, NY 10001)</span>
This is what gets written to the browser for this result item.
Now add the template property to the queryText initialization.
template: $('#autoCompleteItemTemplate').html()
Save the HTML and JavaScript files. If all is correct, when you refresh the page and type Macy in the text box, you should see something like this.
Now that we have the search suggestions displayed let’s center the map and add a pushpin with the search suggestion location.
Add the select function to the queryText element.
select: function (e) {
var item = e.dataItem;
console.log(item.poi.name);
map.setCamera({
center: [item.position.lon, item.position.lat],
zoom: defaultZoom});
azureMapDataSource.add(new atlas.data.Point([item.position.lon, item.position.lat]));
}
The AutoComplete widget passes an object to the function that triggered the event. In the select event, we need the dataItem property, a copy of the result item above. On line 2, we set a variable named item to the dataItem of the event. On lines 4-6, we re-center the map on the new coordinates, using the position property to center the camera. Line 7 draws a pushpin at the coordinates of the selected item.
Save the files, refresh the browser, type Macy, and select “Macy’s Department Store”. If successful, the browser looks similar to this.
In about 100 lines of our code, we were able to use the Kendo UI Autocomplete control and the Azure Maps Search service to build an address/point of interest search component.
The completed code for this post, less the azurekey.js file, can be found at https://github.com/jguadagno/kendo-autocomplete-azure-maps-search.
You can watch one of the sessions of ‘Coding with JoeG’ where I demonstrated this.
Yarn is a package manager that doubles down as project manager. Whether you work on one-shot projects or large monorepos, as a hobbyist or an enterprise user, we’ve got you covered.
If you are familiar with what NuGet is for packages in the .NET ecosystem, Yarn does the same thing, except for web packages (HTML, CSS, JavaScript, etc.).
I looked at Yarn to update the commonly used web frameworks like Bootstrap, jQuery (required for Bootstrap), and Font Awesome, and I couldn’t find anything that told me how. Microsoft started a project called LibMan, which helped with the management of packages, but it only worked in the IDE. So, as I like to do, I worked on figuring it out.
Let’s get to it!
First, you need to download and install Yarn. Details can be found on their Getting Started page. The second step, if you already have an existing ASP.NET project, Core/MVC/Razor, is to delete the wwwroot\lib folder, assuming you have nothing in it but the 3rd-party packages.
By default, Yarn will place everything in the node_modules folder, which could work for you; for me, I was trying to make the smallest number of changes to the rest of the project. There are two ways to override that folder. You can add --modules-folder wwwroot/lib to the command line every time you run Yarn, or you can create a Yarn configuration file.
NOTE: The .yarnrc file is going away, from what I can tell, with v2 of Yarn. To create the configuration file, you need to create a .yarnrc file in the root of your web project. Depending on which IDE you use, the instructions will vary slightly. Once the file is created, add the following line to the .yarnrc file. I found this on StackOverflow.
--modules-folder wwwroot/lib
Save the file.
I’m going to cover adding the required files for the templates that are included with ASP.NET. Here, you will need to go to a command line, terminal, or bash script, whichever is your choice, to add the packages. The syntax is yarn add <packageName>, where <packageName> is the name of the package you want to add.
Here are the commands for the ASP.NET Core template.
yarn add popper.js (deprecation warning)
yarn add jquery
yarn add bootstrap
yarn add jquery-validation
yarn add jquery-validation-unobtrusive
yarn add @fortawesome/fontawesome-free
Note: You will see a warning that popper.js is deprecated; you can ignore that. The next version of Bootstrap will have Popper built in, so you will no longer need it.
If you have additional packages that you want to add, search the Yarn site to see what packages they have. If you find the package there, the site includes other details about the package, including the primary site for the package, which CDNs host the package, and more.
You’ll notice that a package.json and a yarn.lock file get created. The package.json is the list, or manifest, of packages you have added. The file can be edited directly if you know the names and versions of the packages you want to install. The yarn.lock file is used by Yarn and contains some more details about the packages.
If you used the location of wwwroot/lib, you shouldn’t need to do anything else. When you want to update a library, just edit the package.json file and run yarn from the terminal.
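For reference, the package.json produced by the commands above would look something like this; the version numbers are illustrative, and yours will reflect whatever was current when you ran yarn add:

{
  "dependencies": {
    "@fortawesome/fontawesome-free": "^5.15.1",
    "bootstrap": "^4.5.3",
    "jquery": "^3.5.1",
    "jquery-validation": "^1.19.2",
    "jquery-validation-unobtrusive": "^3.2.11",
    "popper.js": "^1.16.1"
  }
}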
Bonus Content! Win/Win!
What’s a content delivery network you ask?
A Content Delivery Network (CDN) is a system of geographically distributed servers that work together to provide fast delivery of Internet content. It’s designed to minimize latency in loading web pages by reducing the physical distance between the server and the user. A CDN allows for a quick transfer of assets needed to load content such as HTML pages, javascript files, stylesheets, images, and videos.
Using Yarn to add your HTML, CSS, or JavaScript dependencies gives you the option not to add these files to your source code repositories, since you can install them at any time. That choice is yours. However, using packages added by Yarn also makes it easier for you to use CDNs for hosting these same files.
There are several CDNs out there to use. If you discover the package on the Yarn site, you’ll see mentions of a few CDNs like jsDelivr, unpkg, and bundle.run. I’ll use cdnjs for these examples.
Using a CDN has several advantages but at the same time has a disadvantage: if the CDN you chose is down, your content will not be served, which will lead to a less-than-designed look and feel for your site. The good news is, ASP.NET has a built-in feature to fall back to local content in the event the CDN is down: the Script and Link tag helpers.
Looking at the sample for jQuery, you’ll notice four additional attributes.
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"
integrity="sha512-bLT0Qm9VnAYZDflyKcBaQ2gg0hSYNQrJ8RilYldYQ1FxQYoCLtUjuuRuZo+fjqhx/qtq/1itJ0C2ejDxltZVFg=="
crossorigin="anonymous"
asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
asp-fallback-test="window.jQuery">
</script>
Name | Usage
---|---
asp-fallback-src | The local source to use if the CDN failed to load the resource
asp-fallback-test | A test that ASP.NET will inject into your page to see if loading from the CDN worked.
You can now replace all of your scripts with a script tag similar to this one. Later in the post, you’ll see a working example.
Even more bonus content! Win/Win Again!
ASP.NET Core added an Environment tag helper. This helper allows you to render content specific to an environment. This means you can serve scripts from your local machine when running in development and serve them from a CDN when running in production. I know, at this point you are probably saying ‘Joe, that sounds really complicated’. Well, it isn’t! Let me show you.
In this example.
<environment include="Staging,Production">
<strong>IWebHostEnvironment.EnvironmentName is Staging or Production</strong>
</environment>
In the Staging and Production environments, ASP.NET will render “IWebHostEnvironment.EnvironmentName is Staging or Production”.
You can use the exclude and include attributes together to create something like this.
<environment include="Development">
<link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.min.css"/>
<link rel="stylesheet" href="~/css/site.css"/>
</environment>
<environment exclude="Development">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.3/css/bootstrap.min.css"
integrity="sha512-oc9+XSs1H243/FRN9Rw62Fn8EtxjEYWHXRvjS43YtueEewbS6ObfXcJNyohjHqVKFPoXXUxwc+q1K7Dee6vv9g=="
crossorigin="anonymous"
asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute"/>
<link rel="stylesheet" href="~/css/site.css" asp-append-version="true"/>
</environment>
In the Development environment, ASP.NET Core will render the files from the local machine. In all other environments, it will try the CDN first, then fall back to the local files. Here is a completed template using the Environment tags and CDNs.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>@ViewData["Title"] - @conferenceName</title>
<environment include="Development">
<link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.min.css"/>
<link rel="stylesheet" href="~/css/site.css"/>
</environment>
<environment exclude="Development">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.3/css/bootstrap.min.css"
integrity="sha512-oc9+XSs1H243/FRN9Rw62Fn8EtxjEYWHXRvjS43YtueEewbS6ObfXcJNyohjHqVKFPoXXUxwc+q1K7Dee6vv9g=="
crossorigin="anonymous"
asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute"/>
<link rel="stylesheet" href="~/css/site.css" asp-append-version="true"/>
</environment>
<environment include="Development">
<script defer src="~/lib/@("@")fortawesome/fontawesome-free/js/all.js"></script>
</environment>
<environment exclude="Development">
<script src="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/js/all.min.js"
integrity="sha512-F5QTlBqZlvuBEs9LQPqc1iZv2UMxcVXezbHzomzS6Df4MZMClge/8+gXrKw2fl5ysdk4rWjR0vKS7NNkfymaBQ=="
crossorigin="anonymous">
</script>
</environment>
</head>
<body>
<header>
<nav class="navbar navbar-expand-sm navbar-toggleable-sm navbar-dark bg-dark border-bottom box-shadow mb-3">
<div class="container">
<a class="navbar-brand" asp-area="" asp-controller="Home" asp-action="Index">Home Page</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target=".navbar-collapse" aria-controls="navbarSupportedContent"
aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="navbar-collapse collapse d-sm-inline-flex justify-content-between">
<ul class="navbar-nav flex-grow-1">
<li class="nav-item">
<a class="nav-link" asp-area="" asp-controller="Home" asp-action="Index">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" asp-area="" asp-controller="Events" asp-action="About">About</a>
</li>
</ul>
</div>
</div>
</nav>
</header>
<div class="container">
<main role="main" class="pb-3">
@RenderBody()
</main>
</div>
<footer class="border-top footer text-muted">
<div class="container">
© 2021 - JosephGuadagno.NET, LLC - <a asp-area="" asp-controller="Home" asp-action="Privacy">Privacy</a>
</div>
</footer>
<environment include="Development">
<script src="~/lib/jquery/dist/jquery.min.js"></script>
<script src="~/lib/bootstrap/dist/js/bootstrap.bundle.min.js"></script>
</environment>
<environment exclude="Development">
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js"
integrity="sha512-bLT0Qm9VnAYZDflyKcBaQ2gg0hSYNQrJ8RilYldYQ1FxQYoCLtUjuuRuZo+fjqhx/qtq/1itJ0C2ejDxltZVFg=="
crossorigin="anonymous"
asp-fallback-src="~/lib/jquery/dist/jquery.min.js"
asp-fallback-test="window.jQuery">
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.5.3/js/bootstrap.min.js"
integrity="sha512-8qmis31OQi6hIRgvkht0s6mCOittjMa9GMqtK9hes5iEQBQE/Ca6yGE5FsW36vyipGoWQswBj/QBm2JR086Rkw=="
crossorigin="anonymous"
asp-fallback-src="~/lib/bootstrap/dist/js/bootstrap.bundle.min.js"
asp-fallback-test="window.jQuery && window.jQuery.fn && window.jQuery.fn.modal">
</script>
</environment>
<script src="~/js/site.js" asp-append-version="true"></script>
@await RenderSectionAsync("Scripts", required: false)
</body>
</html>
I hope this helps you. In a future post, I’ll show how I use Yarn in my build scripts so I don’t have to check in all of the package contents.
]]>On the first attempt, I extended my wireless network with a network repeater. I already had a NETGEAR Nighthawk X4S WiFi Router (R7800), so I added a NETGEAR WiFi Mesh Range Extender (EX8000) to offload some of the wireless clients too. This combination had ‘FastLane3’ technology (Dedicated WiFi Link to the router to avoid halving bandwidth of extended WiFi signals and innovative antenna design for ultimate coverage). This attempt worked for a bit except when I was on a video call, the wife was streaming, and the son was gaming. There were packet drops, and the latency was spiking. This did not work for us. So I looked into other solutions. I wanted to get an ethernet connection in my son’s office to reduce or eliminate the network latency and packet drops.
This led to attempt two to resolve the network latency and packet drops. I updated my router to the ASUS ROG Rapture WiFi 6 Gaming Router (GT-AX11000) - Tri-Band 10 Gigabit Wireless Router. This router has all kinds of benefits, like three bands, so I could dedicate a separate band to my son’s office (his computer and Xbox). This router can also add nodes to the network to create a mesh network using its AIMesh. I tried using just this device by itself for a few days, and the problems did not go away. I then proceeded to add the NETGEAR EX8000 repeater to extend the network, and after a few days, the network latency and packet drops started again.
Update September 30th, 2022
I decided to try out the AIMesh feature of the ASUS ROG Rapture router by adding another AIMesh-capable router to the mix. I picked up an ASUS AX1800 WiFi 6 Router (RT-AX1800S) – Dual Band Gigabit AX Wireless Internet Router and added it to the network. Now I have additional bandwidth and network slots upstairs without drilling any holes or running any wires. I have been using this setup for a few months now, and I have not had any network latency or packet drops. I am very happy with this setup.
This led me to research other options. That is when I discovered MoCA. MoCA (short for Multimedia over Coax Alliance) is a technology that uses the existing coaxial cables already in most people’s homes. In essence, MoCA creates a wired Internet home network without the headache of drilling holes or running wires. Because MoCA technology is wired, it also delivers a reliable, low-lag, and ultra-high-speed connection. All of these are critical to a good streaming video or online gaming experience.
After watching this video and checking out a few diagrams like this one, diagram 1, I decided to give this a shot since I have an existing unused Coax network in my home. I purchased 2 goCoax MoCA 2.5 Adapter for Ethernet Over Coax adapters to give it a shot.
Note: You need at least 2 MoCA adapters for this to work. Also, MoCA can support a maximum of 16 adapters on one network.
My setup is based on having the goCoax adapters. You will need at least 2 coax cables for this setup. If you are using another MoCA adapter that is not the goCoax one, you may need to purchase cable splitters. The goCoax adapters come with the splitter built-in.
The setup was easy and will vary depending on your home network and internet access. For me, I have the local cable company provide Internet only and through coax. I have the cable modem in my office, which is connected via coax.
Step 1: Disconnect the coax cable from the cable modem.
Step 2: Connect the cable connected to the cable modem to the MOCA port on the goCoax.
Step 3: Connect a new coax cable to the TV port of the goCoax adapter and connect the other end of the cable to your cable modem.
Step 4: Power the goCoax device.
Step 5: Connect an ethernet cable from the LAN port of the goCoax device to your router or computer.
At this point, the ‘network traffic’ is going to be sent over the coax network. If you want a hard-wired network connection in another location of the house, as long as there is a coax port nearby, you can install another goCoax adapter, and you should be good.
A visual of the setup.
After a week of using this setup, there have been no network drops, no latency issues, and virtually no packet loss.
In short, if your wireless network is struggling, you don’t have ethernet wiring throughout your home, and you have coax wiring, MoCA adapters are a low-cost way to help.
If you install a MoCA setup, you should check to see if you have a ‘Point of Entry’ (POE) filter installed where your network connection comes into your home. They look something like this.
A POE filter prevents interference between subscriber homes that use MoCA technology. But more importantly, it prevents you from leaking your network data back to your provider or anyone/anything else sharing the cable wiring. You can get one on Amazon for under $10.
Note If you click on an Amazon link and purchase a product, I may get a commission from Amazon. The purpose of the links is to avoid searching and not make money on the blog post.
]]>Streamlabs OBS is a free reliable streaming app with the fastest setup process on the market. We have developed an all-in-one application making streaming easy for everyone. Whether you’re a novice or experienced streamer, Streamlabs OBS will provide you the best streaming experience, with tools built to engage, grow, and monetize your channel.
The first thing, from what I remember, is that Streamlabs OBS runs an ‘Auto Optimize’ function, which you can find on the general tab of settings. From everything I have seen and read, you really never want to run it, except for maybe the first time. Most of the time, you will want to ‘tweak’ the settings to fit your needs. I’ve been on streams where the host says, “Wait a minute. I have to tweak something”. It’s gonna happen for a while.
First, we are going to talk about the settings for Streamlabs OBS. You can get to these by clicking on the gear icon. Again, some settings might not be applicable depending on your hardware.
Everything in this tab is according to your preference. For me, everything is unchecked except for:
Enable multistream, if you multistream. For me, as of the writing of this post, I’m only live streaming to Twitch.
For this tab, I have a few things tweaked based on my hardware and conditions. Please select the Advanced option for ‘Output mode’.
Setting | Value | Description
---|---|---
Audio Track | 1 | Only have one track. This is helpful if you want to separate music from your voice
Encoder | Hardware (NVENC) | This is set to software by default. I changed it to hardware because I wanted to offload some of the video encoding to my video chip/card since my PC is not fast enough
Enforce stream service encoder settings | Checked |
Rate Control | CBR | Constant Bitrate Stream: This is the standard bitrate to use for streamers. Let’s say you’ve set your video bitrate to 3000 Kbps for your next stream; using CBR means your stream will always be at 3000 Kbps, even when less could be used, such as a dark game that lacks detail.
Bit Rate | 6000 | The higher the better
Keyframe Interval | 0 | 0 indicates that Streamlabs will figure it out before it starts. The “Keyframe Interval” could be set at 2, meaning that a keyframe will be rendered every 2 seconds.
Preset | Performance |
Profile | high |
Look-ahead | Unchecked |
Psycho Visual Tuning | Checked |
GPU | 0 |
Max B-frames | 2 |
For a more in-depth look at bitrates, and keyframes, check out What is Video Bitrate.
Setting | Value | Description
---|---|---
Type | Standard |
Recording Path | local storage | If you can, this should be an SSD on a different partition/device than your operating system. Avoid recording to a network drive
Generate File Name without Space | Checked |
Recording Format | mp4 |
Audio Track | 1 | Only have one track. This is helpful if you want to separate music from your voice
Encoder | Hardware (NVENC) | This is set to software by default. I changed it to hardware because I wanted to offload some of the video encoding to my video chip/card since my PC has a dedicated video card now.
Enforce stream service encoder settings | Checked |
Rate Control | CBR | Constant Bitrate Stream: This is the standard bitrate to use for streamers. Let’s say you’ve set your video bitrate to 3000 Kbps for your next stream; using CBR means your stream will always be at 3000 Kbps, even when less could be used, such as a dark game that lacks detail.
Bit Rate | 6000 | The higher the better
Keyframe Interval | 0 | 0 indicates that Streamlabs will figure it out before it starts. The “Keyframe Interval” could be set at 2, meaning that a keyframe will be rendered every 2 seconds.
Preset | Performance |
Profile | high |
Look-ahead | Unchecked |
Psycho Visual Tuning | Checked |
GPU | 0 |
Max B-frames | 2 |
All of these settings are untouched
Setting | Value | Description
---|---|---
Enabled Replay Buffer | Checked | This was on by default; I have not changed it
Maximum Replay Time (Seconds) | 20 |
Setting | Value | Description
---|---|---
Sample Rate | 48khz | Set this to the highest value your audio will support! The higher the hertz, the better the audio quality
Channels | Stereo | Especially if you have music playing
Desktop Audio Device 1 | Disabled |
Desktop Audio Device 2 | Disabled |
Mic/Auxiliary Device 1 | Disabled |
Mic/Auxiliary Device 2 | Disabled |
Mic/Auxiliary Device 3 | Disabled |
You can select your microphone for either of the mic/auxiliary slots if you choose to. Doing so will make your microphone available to all scenes without adding it.
Setting | Value | Description
---|---|---
Base (Canvas) Resolution | 1920x1080 | 1080p. Most providers support 1080p and 4k. If you stream in 4k, the resolution on the viewer’s screen will be different if they don’t have a 4k monitor due to scaling.
Output (Scaled) Resolution | 1920x1080 | HD
Downscale Filter | Bicubic (Sharpened scaling, 16 samples) | Bilinear is best for CPU. Lanczos is better quality but CPU intensive. Bicubic is the in-between.
FPS Type | Common FPS Values | This defaults to frames per second based on the resolution
Common FPS Values | 60 | You can lower this to 30 if you are seeing an issue with your CPU. For streaming code, 30 is an acceptable value.
Setting | Value | Description
---|---|---
Process Priority | High | I have this set to high because I am using a separate machine for streaming; otherwise you should probably choose normal
Setting | Value | Description
---|---|---
Color Format | NV12 |
YUV Color Space | 601 |
YUV Color Range | Partial |
Force GPU as render device | checked | I wanted to offload the encoding of video to the video chip
Some of your default settings for video might be different depending on your locale.
I do not have a separate device to ‘monitor’ the stream audio
Setting | Value | Description
---|---|---
Audio Monitoring Device | Default |
Disable Windows audio ducking | unchecked |
Setting | Value | Description
---|---|---
Filename Formatting | %CCYY-%MM-%DD %hh-%mm-%ss | This is the default value. I kept it because I keep all of the recordings… so far.
Overwrite if file exists | unchecked |
These are the default values. I imagine that if you are streaming your gaming, you might want to provide some replay functionality for a ‘sweet kill’. No one ever said, ‘That was a great use of a lambda, let me replay that’ :smile:
Setting | Value | Description
---|---|---
Replay Buffer Filename Prefix | Replay |
Replay Buffer Filename Suffix | empty |
These are the default values.
Setting | Value | Description
---|---|---
Enable | Unchecked |
Duration | 20 |
Preserve cutoff point (increase delay) when reconnecting | checked |
Setting | Value | Description
---|---|---
Enable | checked |
Retry Delay (seconds) | 10 |
Maximum Retries | 20 |
If you have more than one Network Interface Card (NIC), for example wired and wireless, you can set Streamlabs OBS to only use one. If you have a wired connection, you should use that. Wireless can drop packets.
Setting | Value | Description
---|---|---
Bind to IP | Default |
Dynamically change bitrate when dropping frames while streaming | unchecked | If you notice your CPU or internet connection cannot handle the load, you may want this checked. Doing so will drop the frame rate, the number of ‘snapshots’ that the viewer sees, which could degrade quality. If you are later posting the recording, I would keep this unchecked.
Enable new networking code | unchecked |
Low latency mode | Unchecked | This should be checked if you are seeing a lot of dropped frames or low/slow upload bandwidth from your ISP.
Setting | Value | Description
---|---|---
Enable Browser Source Hardware Acceleration | checked |
Setting | Value | Description
---|---|---
Enable media file caching | checked |
All of the other settings, Hotkeys, Game Overlay, Scene Collections, Notifications, Appearance, and Face Masks, I have left the default values.
Scenes in OBS provide different ways to present content to the viewer. Do you just want to show code? Do you just want to show you? Scenes are a way to do that, and a preference. From what I have seen, no two streams are the same. I have eight scenes registered; three of them are just helper scenes that are reused as part of other scenes.
Scene Name | Use
---|---
Overlay | Samples, provided by the theme, with some visual UI elements
Starting Scene | I use this when I go live, to let the viewers know “I’ll be there in a minute”
Live Share - Elgato | This is my primary scene. It shares the primary capture from the Elgato card with the video from the webcam
Be Right Back Scene | I have not used this one yet. It’s for the times I might need to run and get the door, or another drink, or something
Ending Scene | This signals the users that the stream is ending and provides some social media links for them to continue the conversation. My mic is still on at this point. This might be replaced with the Just Chatting scene, which I am still building.
Just Chatting | Work in progress. This will just be me with some background images for just the conversation.
Secret Time | I use this when I am working with secrets on screen and I don’t want to share them :smile:
Social Media | A shared scene with all of my social media links
Alerts | A shared scene with the OBS/Twitch alerts for new followers, subscribers, etc.
All of my scenes and alerts use the Pure theme by Own3d for a consistent look.
Filters are like advanced settings that let you tweak your devices (sources) even more. I’m going to cover some of the Webcam and Microphone filters I use to provide a better experience.
One thing to keep in mind is that the order of the filters matters. The output of one filter is fed in as the source of the next filter. So if you have four filters, the input of the fourth filter would be the output of the third, not the source data. You might want to play around with the ordering a bit.
To get to the filters in OBS, right-click on a source and choose Filters.
This filter will remove or add pixels to your webcam. The webcam I use has a wide range, so I crop out a lot of it just so my face and shoulders are available.
Setting | Value | Description
---|---|---
Left | 300 |
Top | 180 |
Right | 300 |
Bottom | 50 |
I don’t use this filter any more.
Chromakeying is used to mask out the background of your camera’s field of view. Think of the weather forecaster on TV standing in front of the weather map: that image is superimposed by a computer, which replaces the chromakey color with the computer image. I have switched to a blue screen chromakey for my setup since my chair is blue.
You will probably mess around with these settings a lot until you get it just right. The lighting in your room/office/studio is a big contributor to this.
Setting | Value | Description |
---|---|---|
Key Color Type | Green | |
Similarity | 440 | Note: I play around with this number a lot depending on the lighting in my room. I usually don’t go under 400 or higher than 450 |
Smoothness | 80 | |
Key Color Spill Reduction | 100 | |
Opacity | 100 | Your personal preference. How much of yourself do you want on the screen? |
Contrast | 0 | |
Brightness | 0 | |
Gamma | 0 | |
Semi-pro tip: Don’t wear the same color shirt as your Chromakey :smile:
These filters are based on a video I watched, Best Microphone Settings for Streamlabs OBS (2020). I recommend you watch this video to learn more about the audio filters.
Setting | Value | Description |
---|---|---|
Gain | 3.7 | The more gain, the louder the audio is |
Setting | Value | Description |
---|---|---|
Method | RNNoise | |
These settings will require a lot of tweaking to get a clear sound for your recording. I find myself tweaking the Close and Open Thresholds mostly. The video above does a great job of explaining each of the settings.
Setting | Value | Description |
---|---|---|
Close Threshold | -53 | |
Open Threshold | -26 | |
Attack Time | 25 | |
Hold Time | 200 | |
Release Time | 150 | |
Setting | Value | Description |
---|---|---|
Ratio | 10 | |
Threshold | -18 | |
Attack | 6 | |
Release | 60 | |
Output Gain | 0 | |
Sidechain/Ducking Source | None | |
That’s it! Again, your mileage may vary. These are the ‘optimal’ settings for my hardware, software, and environment.
For details on my hardware, check out this post
I’ve listed out all of the equipment that I use to stream ‘Coding with JoeG’ on Twitch. While not all of this equipment is necessary, it has helped me produce a reliable stream. Your mileage may vary!
I’ve learned a little bit and upgraded my hardware setup. Now I use a more powerful laptop with more memory and a better video card: a Dell Precision 5540. The extra memory is key when you are streaming from one PC. At a minimum when I stream, I am running JetBrains Rider, StreamLabs OBS, and Microsoft Edge. Having the extra memory keeps the operating system from swapping applications to the hard drive.
The next big change is the video card. This laptop has a higher-end (for laptops) video card, the NVIDIA Quadro T2000.
A Dell Precision 5540 with the specs of:
I no longer use a secondary computer thanks to the power of the primary streaming PC.
View on Amazon
While I could have used the built-in webcam on either laptop for the stream, the external webcam allows me to customize the webcam settings further. Luckily, with this camera, I don’t need to customize anything. The default, “out of the box”, settings produce an excellent quality 1080p image. Another good thing about this camera is that I can mount it on top of the laptop or on a tripod.
View on Manufacturers Site View on Amazon
A microphone is probably the one piece of hardware that I recommend you buy; it doesn’t necessarily have to be the ATR2100. I love the sound quality of this microphone so far. Most built-in microphones are low quality and, on laptops, sit next to the keyboard. These two factors lead to the microphone picking up considerable extra noise. Having a separate microphone mounted on a boom arm with a shock mount and pop filter cuts out a lot of that noise. With this configuration, you can’t hear me type, the fan in my office, or when I accidentally bang the desk. This microphone is a great starter purchase. As a bonus, it comes with an XLR connection to plug into a mixer board and other gear if you want to get serious about your audio.
NOTE This is no longer used since I only use one PC.
View on Manufacturer Site View on Amazon
Don’t let the name ‘Game Capture’ fool you. This device does more than capture game footage. I think the initial thought behind the product was for game capture. However, this device allows you to capture audio and video via HDMI and use it as a Video Source in OBS. Setup is fairly easy and everything you need is included.
“Green Screening” or chroma keying allows software like OBS to easily separate green screens and panels from the people standing in front of them and replace those backgrounds with pretty much anything. You’ve seen chroma keying in Hollywood special effects when superheroes fly or while watching your local weather forecast when the weather person is magically standing in front of a map. Many streamers use chroma keying to drop out everything behind them, so you only see them on the stream. I use the ePhotoInc chroma key collapsible blue/green background with the Fovitec stand to hold it.
For OBS to remove or replace the background, you need to add a Chromakey filter. You can check out the post ‘Coding with JoeG - Software Configuration’ for details on my configuration.
View on Amazon
View on Manufacturers Site View on Amazon
View on Manufacturers Site View on Amazon
The StreamDeck provides physical buttons that you can assign to an assortment of things. You can program scene transitions, sound effects, send tweets, and more. For me, I have it currently starting a few different applications and sending tweets.
I haven’t used the Elgato Stream Deck to its fullest extent yet. I’ll play around with it some more this weekend. Update: I’ve since started using the Stream Deck a lot more, as you can see in the picture. I’ll blog about the configuration another time.
View on Amazon
I have 6 ‘Daylight’ bulbs in my home office, so the lighting is pretty good. I use these lights mostly to help with imperfections and shadows (mainly on my face).
NOTE This is no longer used since I only use one PC.
View on Amazon
A USB hub is not required. However, because of my current configuration of laptops, I do not have enough USB ports for the microphone, webcam, Elgato card, and Stream Deck.
Note: This item is no longer available.
While not required, it is strongly recommended depending on your network configuration. The laptop is hard-wired into my router (it’s super close to my desk). Having the laptop hard-wired prevents Wi-Fi interference and potential packet/frame dropping.
Your needs may vary. This equipment list is what I purchased to deliver an inexpensive, quality stream. Depending on your hardware, you might not need anything from this list. I curated a list of the equipment for the Coding with JoeG stream in this list on Amazon. Jeffrey Fritz, part of the LiveCoders team, also blogged about his configuration in Live Streaming Setup - 2019 Edition.
Note If you click on an Amazon link and purchase a product, I may get a commission from Amazon. The purpose of the links is to save you some searching, not to make money on the blog post.
This Thanksgiving is going to be different for many of us. We will not be spending time with our loved ones, to keep them and us safe. We won’t be lining up at Target at 5am to get that cool item that is 90% off retail. No, we won’t. Instead, we will be home keeping ourselves and our loved ones safe. But if anything, the events of this year and last year have taught me to look at the silver lining. This Thanksgiving, there will be less frustration for me as the wife and mother-in-law battle it out for kitchen supremacy. For some, it might be skipping that argument with that crazy relative we all have, or the fight over your favorite sportsball team. The point is, we all have something to be thankful for. Find your thing to be thankful for and focus on that instead of all the things you can’t do this year.
I am thankful that I and my immediate family are healthy and happy (for the most part). Find your happiness.
BTW, if you are home this Thanksgiving and unable to “see” your relatives, Zoom is lifting their typical 40-minute call limit this Thursday to enable families to stay connected. If you just need someone to talk to, give me a call any time of day. I’m around!
Have fun and be safe!
HTTP Error 500.31 - ANCM Failed to Find Native Dependencies
Common solutions to this issue:
The specified version of Microsoft.NetCore.App or Microsoft.AspNetCore.App was not found.
This reminded me of something that Scott Hunter mentioned at .NET Conf: Azure App Service supports .NET 5, just not by default.
That reminded me that I had to check the configuration of the App Service and change it to enable support for .NET 5. It was pretty easy to do.
Under Stack settings, change the following:

Setting | Value |
---|---|
Stack | .NET |
.NET Framework Version | .NET 5 (Early Access) |
After refreshing your browser, the error should go away and the application should load.
Here is what the page looks like on my site.
I host all of my presentation decks on OneDrive and make them available to all the attendees of my talks. This allows me to embed the links in any emails or sites that I want to. I embed all the talks, code links, and sample videos on each of the talks’ respective pages on my site. Since my site is built using Jekyll and hosted on GitHub Pages, embedding the slide decks was quite simple. Here is how.
First, navigate to the file you want to embed in your OneDrive file list (this needs to be done via the web client) and select it.
Next, you will see an `</> Embed` button on the toolbar; I outlined it in red. Click on the `</> Embed` button and you will see something like this.
Copy out the HTML, although for this to work, you only need the `src` value. In this example, it is `https://onedrive.live.com/embed?cid=406EE4C95978C038&resid=406EE4C95978C038%2179191&authkey=AFFYuImKsNsScF4&em=2`.
That’s it from OneDrive.
Initially, this is going to be a two-part process. It’s two-part because my presentation pages use a separate layout in Jekyll, built as a presentation layout.
First, I created a collection for the presentation layout. Here is the relevant section of my `_config.yml`.
```yaml
collections:
  presentations:
    output: true
    permalink: /presentations/:name
```
Then, further down in the `_config.yml`, I added a `defaults` section to handle the pages for the presentations. This is so I don’t have to add the `layout` front matter to every presentation.
```yaml
# Defaults
defaults:
  # _presentations
  - scope:
      path: ""
      type: presentations
    values:
      layout: presentation
      share: true
      classes: wide
      author_profile: true
```
Next up is creating the layout. Create a folder in the root of your Jekyll site; I called mine `_layouts`. The underscore is important, as Jekyll won’t ‘publish’ folders with an underscore. Then, in that folder, create a file and name it `presentation.html`. Note: This name should match the name in the `values.layout` that is defined in the defaults, without the `.html` extension.
I hid some of the other parts of the HTML so that we can focus on the PowerPoint embedding.
```html
{{ content }}

{% if page.powerPointUrl %}
<h3>Slide Deck</h3>
<iframe src="{{page.powerPointUrl}}" width="610px" height="367px" frameborder="0"></iframe>
{% endif %}
```
Line 1 is where the content of the presentation is displayed. More on that later.
Line 3 checks to see if a page attribute of `powerPointUrl` is present. To do this, we’ll create some front matter for our presentation. Again, more on that later.
Lines 4 and 5 are where I embed the PowerPoint; I recreate the HTML that OneDrive provided.
Line 6 closes the conditional statement opened on line 3.
Bonus If you look at the full source code, you can embed YouTube videos also.
Now that we have created the layout for the presentation page, let’s look at a sample presentation.
```markdown
---
title: .NET 5 - What is it?
isKeynote: false
isRetired: false
sourceUrl:
powerPointUrl: https://onedrive.live.com/embed?cid=406EE4C95978C038&resid=406EE4C95978C038%2179191&authkey=AFFYuImKsNsScF4&em=2
---
We have the .NET Framework, .NET Standard, .NET Core, ASP.NET, ASP.NET Core ... do not get me started on Classic ASP or other platforms :). Where are we going with .NET? What is .NET 5? What is going to happen to these 'legacy' frameworks? Let us take a look at the past, the present, and the future of .NET. After this talk, you will have a good understanding of where Microsoft is taking the platform and where you can focus your development efforts.
```
On line 6, we provide the URL for the PowerPoint presentation.
You can see a more ‘complex’ sample by checking out Typescript for the Microsoft Developer
You can check out all of the ‘source code’ for the site on the GitHub repo.
If you want to see how I ‘dynamically’ generate the presentations page, check out _pages/presentations.md
I’ve been working on a side project, JosephGuadagno.Net Broadcasting (I know, I need a better name :smile:), for a month or so now. The project’s goal is to provide a way for me to promote talks, scheduled streams, my YouTube videos, and blog content on social media. This project is a collection of Azure Functions that perform different tasks, like querying the YouTube APIs, checking RSS feeds, posting to Facebook feeds, etc. The project, more so the components that make it up, has started to get quite large. Add the fact that the solution is running in Azure, on someone else’s computer, and I wanted some logging and telemetry in the components to know what was happening and when. I added NLog to the project to help with the logging; Azure Monitor, aka Application Insights, is coming next :smile:. If you need to add logging to your application, I suggest you take a look at NLog. It’s pretty easy to use once you get the configuration right. And here lies the reason for this blog post…
If you haven’t used NLog before: like most logging frameworks, it needs a configuration to run. This configuration is typically in an `nlog.config` file, although with some updates to the project, you can use an `appsettings.json` file. Honestly, I think the project is trying to catch up to the configuration system in ASP.NET Core. The framework looks for the `nlog.config` file in the same folder the assembly is executed from. For Azure Functions, that folder varies depending on where you are running it. If, like me, you are running it through JetBrains Rider, it runs from a really long directory (the installation directory of the Azure Functions framework/tools). This location is not ideal, in my opinion, for the `nlog.config`; I would prefer it in the application directory. So far there really isn’t a problem, since NLog provides the ability to change the location of the configuration file. I tried that using the following code in my `Startup.cs`:
```csharp
public Startup()
{
    LogManager.Setup()
        .SetupExtensions(e => e.AutoLoadAssemblies(false))
        .LoadConfigurationFromFile(Environment.CurrentDirectory + Path.DirectorySeparatorChar + "nlog.config", optional: false)
        .LoadConfiguration(builder => builder.LogFactory.AutoShutdown = false);
}
```
Oh, did I mention I was using the new Azure Functions Dependency Injection? On line 5, I tell NLog to load the configuration from `Environment.CurrentDirectory + Path.DirectorySeparatorChar + "nlog.config"`, which on my local machine translates to something like `c:\MyProjects\FunctionApp\nlog.config`. Running a few tests locally, everything was working and I was getting logs. Once I committed the code and it was published to the Azure Function via the GitHub Action, I noticed I wasn’t getting any logs. In fact, I was getting an error message:
System.Private.CoreLib: Exception has been thrown by the target of an invocation. NLog: Failed to load NLog LoggingConfiguration. ‘D:\Program Files...’
I’ve left out the full file path that was provided.
This led me to use the Azure Functions Advanced Tools (Project Kudu) to do some directory browsing to make sure I had copied the `nlog.config` file to the root of the application.
The file is there! What could be the problem? The first clue was the exception thrown about searching in the path ‘D:\Program Files…’. Based on the code sample above (line 5), `Environment.CurrentDirectory + Path.DirectorySeparatorChar + "nlog.config"` should have been pulling the configuration from `D:\home\site\wwwroot`, but it wasn’t. Now, I could have hard-coded the value on line 5 and stopped there, but I didn’t. After some research, it looked like I could use the `ExecutionContext` to get the application directory that the function is running in. But as the docs say…
The dependency injection container only holds explicitly registered types. The only services available as injectable types are what are setup in the Configure method. As a result, Functions-specific types like BindingContext and ExecutionContext aren’t available during setup or as injectable types.
Well, that stinks! Back to the drawing board! Now I had to figure out how to get the current folder based on where the code is running. Reflection isn’t easy to get right, and the folder structure varies depending on the tools you are using: Rider, Visual Studio, Visual Studio Code, etc. I needed the path to map to something like `C:\MyProjects\FunctionApp` on Windows, `~/Projects/FunctionApp` on Mac, and `d:/Home/site/wwwroot/` on Azure. There was no single environment variable to do this, so this is what I came up with:
```csharp
var localRoot = Environment.GetEnvironmentVariable("AzureWebJobsScriptRoot");
var azureRoot = $"{Environment.GetEnvironmentVariable("HOME")}/site/wwwroot";

var _applicationDirectory = localRoot ?? azureRoot;
LogManager.Setup()
    .SetupExtensions(e => e.AutoLoadAssemblies(false))
    .LoadConfigurationFromFile(_applicationDirectory + Path.DirectorySeparatorChar + "nlog.config", optional: false)
    .LoadConfiguration(configurationBuilder => configurationBuilder.LogFactory.AutoShutdown = false);
```
Line 1 determines the path if running locally. The environment variable `AzureWebJobsScriptRoot` is `null` when running on Azure.
Line 2 creates the path when running in Azure. The environment variable `HOME` points to the folder that the App Server running your Function(s) is running out of.
Line 4 creates the `_applicationDirectory` variable based on whether or not `localRoot` is null.
This solved the problem both locally and in Azure. I hope in version 4 of the Azure Functions SDK, the directory, environment variables, and how settings are handled are a little more consistent.
It looks like the `ExecutionContext` works with a slight modification. I placed this code in the `Configure` method to come up with the application directory. This works both locally and when running in an Azure Function.
```csharp
var executionContextOptions = builder.Services.BuildServiceProvider()
    .GetService<IOptions<ExecutionContextOptions>>().Value;
var currentDirectory = executionContextOptions.AppDirectory;
```
Now, I can initialize NLog in the `Configure` method like I do other classes and DI. Note: I just initialize NLog here; it is not recommended to log in the constructor.
```csharp
LogManager.Setup()
    .SetupExtensions(e => e.AutoLoadAssemblies(false))
    .LoadConfigurationFromFile(currentDirectory + Path.DirectorySeparatorChar + "nlog.config", optional: false)
    .LoadConfiguration(configurationBuilder => configurationBuilder.LogFactory.AutoShutdown = false);
```
Well, I hope this helps you and saves you a few hours getting NLog to work in an Azure Function, and potentially with any file/folder work in Azure.
You can declare the `_applicationDirectory` variable as a private variable in the `Startup` class of your function and then configure the Configuration system to load environment-specific settings with the following code. You can use the `currentDirectory` variable that is declared in the `Configure` method to get the directory/path to your configuration file:
```csharp
var config = new ConfigurationBuilder()
    .SetBasePath(currentDirectory)
    .AddJsonFile("local.settings.json", true)
    .AddUserSecrets(Assembly.GetExecutingAssembly(), true)
    .AddEnvironmentVariables()
    .Build();
```
My Startup.cs
This blog post demonstrates connecting to the Contact API that I have been building on my stream, Coding with JoeG. The API project can be found in the Contacts Repository. While the code will build, it will not run until you have the client application registered in Azure with the correct permissions.
You can view the following Introduction to React Native video recording from my stream. You can also watch me code this live on Building the API Client and Authentication.
Completed code repository at https://github.com/jguadagno/contacts-react-native-client
This blog post assumes that you have certain tools installed and are familiar with them. If you don’t have the tools installed, I have provided a quick guide and links to get you started.
Visit the Node.js installation for details on installing Node.js.
After node.js is installed, you can optionally load the required packages that will be used later ahead of time so the installation goes faster.
```bash
npm install -g expo-cli msal @openapitools/openapi-generator-cli @react-native-community/masked-view react-native-gesture-handler react-native-reanimated react-native-screens react-native-safe-area-context @react-navigation/native @react-navigation/stack axios url
```
or with Yarn
```bash
yarn global add expo-cli msal @openapitools/openapi-generator-cli @react-native-community/masked-view react-native-gesture-handler react-native-reanimated react-native-screens react-native-safe-area-context @react-navigation/native @react-navigation/stack axios url
```
Installing React Native
Once installed, add the expo cli to your project.
```bash
yarn add expo-cli
```
To generate the React Native application, execute the following commands in your terminal or command prompt. Replace `my-app` with the name you want to call your application.
```bash
# Create the React Native project using the TypeScript template
npx create-react-native-app my-app --template with-typescript
# Install additional React Native tools: React Navigation
yarn add @react-navigation/native @react-navigation/stack
```
The OpenAPI Generator is used to generate an API client for the React Native application to use. You can pick from a few different generators but for this example, I am using the Axios template named ‘typescript-axios’.
Using Yarn, we can create a command that will generate our API client fairly easily. Open up a command prompt or terminal in the directory of the project and execute the following commands. Note: change `my-app` to the application name of your React Native project.
```bash
# CD into the project
cd my-app
# Add axios project dependencies
yarn add axios url
# Add client generator (as Dev dependency)
yarn add -D @openapitools/openapi-generator-cli
# Create api folder (for everything API related). It should be lowercase to avoid warnings
mkdir api
# Download Open API file to api folder
curl https://cwjg-contacts-api.azurewebsites.net/swagger/v1/swagger.json > ./api/openapi.json
# Add generator script to package.json
npx add-project-script -n "openapi" -v "openapi-generator-cli generate -i ./api/openapi.json -g typescript-axios -o ./api/generated"
# Generate the client (requires JDK installed)
yarn openapi
```
If your API changes and you need to update your API client, just execute the following command
```bash
yarn openapi
```
to generate a new client.
For the sake of this blog post, I named my application Contacts, using ContactsApi as the name of my API client. You can replace `Contacts` with `my-app` or whatever you called your application. This blog post also assumes we are working on a brand-new React Native application.
The first step is to expose the API client to our React Native application.
In the `./Contacts/api` folder, create a new file `index.ts` with the following contents:
```typescript
import { ContactsApi } from './generated';

export default {
    Contacts: new ContactsApi()
};
```
Now that the API is exposed, let’s use it. Navigate to the `App.tsx` file in the project root folder and import the `Api`. Note: if you are using an IDE, it may insert this import for you.
```typescript
import Api from './api'
```
Now we want to consume the API. Replace the HTML in the `return()` block with this:
```tsx
<View style={styles.container}>
    <Button title="Hello" onPress={() => {
        var list = Api.Contacts.contactsGet();
        console.log("Hello");}} />
</View>
```
On the third line, you will see that we are calling `Api.Contacts.contactsGet();`. We are not doing anything with the `list` variable yet; we are just getting the API connected. The call to the API will not work yet because we are still wiring it up. If you want to see it working, you can execute the following command:
```bash
yarn web
```
in your terminal to start up the application. Clicking on the ‘Hello’ button will log the word ‘Hello’ to the developer tools console.
Now we have to wire up the security. To do that, we first need to install the NPM package for the Microsoft Authentication Library (MSAL). Note: Make sure you are in the `Contacts` directory in your terminal session.
```bash
yarn add msal
```
Once the package is done installing, create a folder in `Contacts` called `msal`. This folder will be used for the classes that interact with the Microsoft Identity library. You will need to create three files in the `msal` folder.
The first file, `IRequestConfiguration.ts`, contains the interface that represents the request with scopes.
```typescript
export default interface IRequestConfiguration {
    scopes: string[];
    state?: string;
}
```
The second file, `MsalConfig.ts`, contains the configuration for the library. You will need to edit this class based on your Azure and authentication configuration. You’ll need to, at minimum, change the `config.auth.clientId` to match that of your Azure client id.
```typescript
const MsalConfig = {
    config: {
        // Azure Credentials
        auth: {
            clientId: '', // Replace with your client id
            authority: "https://login.microsoftonline.com/common",
            redirectUri: "https://localhost:19006/Auth",
            navigateToLoginRequestUrl: false,
            validateAuthority: false
        },
        cache: {
            cacheLocation: "sessionStorage" // session storage is more secure, but prevents single-sign-on from working. other option is 'localStorage'
        } as const
    },
    // The default scopes are listed here since the scopes for individual pages may be different
    // Replace these with any default scopes you need for your application.
    defaultRequestConfiguration: {
        scopes: ["scope1", "scope2"]
    }
}
export default MsalConfig;
```
The third file, `MsalHandler.ts`, contains the interactions between your application and the Microsoft Identity library.
```typescript
import { UserAgentApplication, AuthResponse, AuthError } from 'msal';
import MsalConfig from './MsalConfig';
import IRequestConfiguration from "./IRequestConfiguration";

class UserInfo {
    accountAvailable: boolean;
    displayName: string;

    constructor() {
        this.displayName = "";
        this.accountAvailable = false;
    }
}

export default class MsalHandler {
    msalObj: UserAgentApplication;
    redirect: boolean;
    useStackLogging: boolean;

    // for handling a single instance of the handler, use getInstance() elsewhere
    static instance: MsalHandler;

    private static createInstance() {
        var a = new MsalHandler();
        return a;
    }

    public static getInstance() {
        if (!this.instance) {
            this.instance = this.createInstance();
        }
        return this.instance;
    }

    // default scopes from configuration
    private requestConfiguration: IRequestConfiguration = MsalConfig.defaultRequestConfiguration;

    // we want this private to prevent any external callers from directly instantiating, instead rely on getInstance()
    private constructor() {
        this.redirect = true;
        this.useStackLogging = false;
        const a = new UserAgentApplication(MsalConfig.config);
        a.handleRedirectCallback((error, response) => {
            if (response) {
                this.processLogin(response);
            }
            if (error) {
                console.error(error);
            }
        });
        this.msalObj = a;
    }

    public async login(redirect?: boolean, state?: string, scopes?: string[]) {
        if (state) {
            this.requestConfiguration.state = JSON.stringify({ appState: true, state });
        }
        if (redirect || this.redirect) {
            this.msalObj.loginRedirect(this.requestConfiguration);
        } else {
            try {
                var response = await this.msalObj.loginPopup(this.requestConfiguration);
                this.processLogin(response);
            } catch (e) {
                console.error(e);
            }
        }
    }

    public async acquireAccessToken(state?: string, redirect?: boolean, scopes?: string[]): Promise<String | null> {
        if (scopes) {
            this.requestConfiguration.scopes = scopes;
        }
        if (state) {
            this.requestConfiguration.state = JSON.stringify({ appState: true, state });
        }
        try {
            var token = await this.msalObj.acquireTokenSilent(this.requestConfiguration);
            return token.accessToken;
        } catch (e) {
            if (e instanceof AuthError) {
                console.error("acquireAccessToken: error: " + JSON.stringify(e));
                if (e.errorCode === "user_login_error" || e.errorCode === "consent_required" || e.errorCode === "interaction_required") { // todo: check for other error codes
                    this.login(redirect, state, this.requestConfiguration.scopes);
                }
            }
            console.error(e);
        }
        return null;
    }

    public getUserData(): UserInfo {
        var account = this.msalObj.getAccount();
        var u = new UserInfo();
        if (account) {
            u.accountAvailable = true;
            u.displayName = account.name;
        }
        return u;
    }

    public processLogin(response: AuthResponse | undefined) {
        if (!response) return;
        if (response.accountState) {
            try {
                var state = JSON.parse(response.accountState);
                if (state.appState) { // we had a redirect from another place in the app before the authentication request
                    window.location.pathname = state.state;
                }
            } catch {
                console.log("couldn't parse state - maybe not ours");
            }
        }
    }
}
```
Outside of the initial changes to `MsalConfig.ts`, you shouldn’t have to change these files once you create them.
I’ve covered how to register/create an application in Azure before. If you haven’t done this before, check out ‘Working with Microsoft Identity - Registering an Application’
Once you have created the application, update `Contacts\msal\MsalConfig.ts` with the correct client id.
To verify that we have the authentication configuration working, we are going to build a screen that uses the `MsalHandler` to interact with the Microsoft Identity library and services. Create a folder called `screens`; the folder structure is totally optional. I typically break out the folder structure of my applications based on functionality, so screens makes sense to me. We are going to create a file called `Auth.tsx`. This screen serves two purposes right now: the first is to log you into the application; the second is to show you what is in the token that the Microsoft Identity library is using.
NOTE You should not use most of this code in production. Do not show your tokens or credentials in your application
Paste the following contents into the newly created `Auth.tsx`.
```tsx
import React from 'react';
import { Button, View, Text, FlatList, StyleSheet } from 'react-native';
import MsalHandler from '../msal/MsalHandler';

export class Claim {
    key: string;
    value: string;

    constructor(key: string, value: string) {
        this.key = key;
        this.value = value;
    }
}

export default class Auth extends React.Component {
    msalHandler: MsalHandler;
    accountAvailable: boolean;

    constructor(props: any) {
        super(props);
        this.msalHandler = MsalHandler.getInstance(); // note this returns the previously instantiated MsalHandler
        this.handleClick = this.handleClick.bind(this);
        this.accountAvailable = false;
    }

    state = {
        claims: Array<Claim>(),
    }

    componentDidMount() {
        var account = this.msalHandler.msalObj.getAccount();
        if (account) {
            this.accountAvailable = true;
        }
        if (this.accountAvailable) {
            this.parseToken(this.msalHandler.msalObj.getAccount().idToken);
        }
        else { }
    }

    parseToken(token: any) {
        var claimData = Object.keys(token).filter(y => y !== "decodedIdToken" && y !== "rawIdToken").map(x => {
            return new Claim(x, Array.isArray(token[x]) ? token[x].join(",") : token[x].toString());
        });
        this.setState({ claims: claimData });
    }

    render() {
        if (this.accountAvailable) {
            return (
                <View style={styles.container}>
                    <Text>User Claims</Text>
                    <FlatList
                        data={this.state.claims}
                        renderItem={(claimData) => (
                            <View style={styles.row}>
                                <Text style={styles.column}>{claimData.item.key}</Text>
                                <Text style={styles.column}>{claimData.item.value}</Text>
                            </View>
                        )} />
                </View>
            )
        } else {
            return (
                <View style={styles.container}>
                    <Button onPress={this.handleClick} title="Login" />
                </View>
            )
        }
    }

    async handleClick(e: any) {
        e.preventDefault();
        console.log("clicked");
        await this.msalHandler.login();
    }
}

const styles = StyleSheet.create({
    container: {
        flex: 1,
        backgroundColor: 'white',
        justifyContent: 'center',
        flexDirection: 'row',
        flexWrap: 'wrap',
        alignItems: 'flex-start'
    },
    row: {
        display: 'flex',
        flexDirection: 'row',
        flexWrap: 'wrap',
        width: '100%'
    },
    column: {
        display: 'flex',
        flexDirection: 'column',
        flexBasis: '50%',
        flex: 1
    }
});
```
Now, I’m not going to explain every line of the file, just highlight the parts that are important to the authentication.
Line 3 imports the `MsalHandler` so it is available to this screen.
Line 5 declares a `Claim` class, which is used to display the claims. This is not needed for authentication but is helpful while you are troubleshooting.
Note Do not include the `Claim` class in the production version of your application.
Line 21 gets an instance of the `MsalHandler`.
In the `componentDidMount()` function (lines 30-39), we attempt to get the account for the current user via a call to `msalHandler.msalObj.getAccount()` on line 31. If the account is not `undefined`, we set the local variable `accountAvailable` equal to `true`.
The `parseToken` function (lines 41-46) splits the token into key-value pairs for display. Again, this is for debugging and testing purposes. DO NOT include this code in your production application.
The `render` method handles the display logic, which differs depending on whether the user is logged in or not.
The `handleClick` function (lines 72-76) performs the login by executing the `msalHandler.login()` method.
Now let’s update the application to call the authentication page. Replace the contents of `App.tsx` with the following:
```tsx
import React from 'react';
import { Button, StyleSheet, Text, View } from 'react-native';
import Api from './api'
import { NavigationContainer } from '@react-navigation/native';
import { createStackNavigator } from '@react-navigation/stack';

import Auth from './screens/Auth';
import MsalHandler from './msal/MsalHandler';

const msal = MsalHandler.getInstance();
var user = msal.getUserData();

function HomeScreen({navigation}) {
    return (
        <View style={styles.container}>
            <Text>Welcome new followers!</Text>
            <Button title="Hello" onPress={() => {console.log("Hello");}} />
            <Button title={user.accountAvailable ? "Claims for " + user.displayName : "Login"} onPress={() => navigation.navigate('Auth')} />
        </View>
    );
}

const Stack = createStackNavigator();

export default function App() {
    return (
        <NavigationContainer>
            <Stack.Navigator>
                <Stack.Screen name="Home" component={HomeScreen} />
                <Stack.Screen name="Auth" component={Auth} />
            </Stack.Navigator>
        </NavigationContainer>
    );
}

const styles = StyleSheet.create({
    container: {
        flex: 1,
        backgroundColor: 'white',
        alignItems: 'center',
        justifyContent: 'center',
    },
});
```
Again, I’m not going to explain the whole file, just the parts related to the authentication.
Lines 7 and 8 import the new Auth screen and the `MsalHandler` library.
Line 10 creates an instance of the `MsalHandler`.
Line 11 attempts to get the user data from the `getUserData` function of the `MsalHandler`. The call checks to see if the user has authenticated and already has an access token.
Line 18 determines whether to display the link to the authentication page or the user information. If the user is already authenticated, the button title changes to the user’s name; in my case, the button will be titled Claims for Joseph Guadagno. If the user is not logged in or authenticated, the button will display Login.
Start the application. Execute the following command from the console.
```bash
yarn web
```
If this is the first time you are running this application with this client id, or the first time running the application with the assigned client id as the current user, Azure Active Directory will prompt you to log in and accept the permissions for the application. In our sample, we are asking to call the APIs on behalf of the signed-in user. If prompted, accept the permissions.
If the login was successful, the login button should change to Claims for … where the … is your name.
If you click on the Claims for … button, you will see the claims that were sent back.
Now that we verified the API and application authentication works, we are going to need to update the API client to add the required headers for authentication and change the base URL.
Open up `Contacts/api/index.ts` and replace the contents with the following:
```typescript
import Axios from 'axios';
import MsalHandler from '../msal/MsalHandler';
import { ContactsApi } from './generated';

const msalHandler = MsalHandler.getInstance();

const instance = Axios.create({baseURL: 'https://localhost:5001'});
instance.interceptors.request.use(
    async request => {
        var token = await msalHandler.acquireAccessToken(request.url);
        request.headers["Authorization"] = "Bearer " + token;
        return request;
    }
);

export default {
    Contacts: new ContactsApi(null, 'https://localhost:5001', instance)
};
```
As you can see, this is the first place where we are using the Axios library, and this is the primary reason for using Axios instead of the native React Native fetch function: Axios provides us with the ability to intercept requests. We want/need to intercept the request so we can add the required authentication header bearer token.
Line 5 gets an instance of the `MsalHandler`.
Line 7 creates an instance of the Axios library for the URL `https://localhost:5001`.
Lines 8-14 create an interceptor for any request to `https://localhost:5001`. This interceptor takes the token returned from the Microsoft Identity library call to `acquireAccessToken` (line 10) and creates a new authorization header with the bearer token (line 11).
Line 17 changes the initial creation of our API client to use the base URL of `localhost:5001` and the instance of Axios created on line 7.
NOTE You’ll want to change the URLs to your production URLs
Now that the API client has been updated, let’s add a new button to the application so that we can make some API calls to verify the authentication is working.
Open up the `App.tsx` file and add the following button:
```tsx
<Button title="Number of Contacts" onPress={async () => {
    var contactList = await Api.Contacts.contactsGet();
    console.log("Number of contacts: " + contactList.data.length);

    var contact = await Api.Contacts.contactsIdGet(1);
    console.log("First Contact Name: '" + contact.data.fullName + "'");
}} />
```
If the application is not already started, start it. Then:
Now, if you had access to the API, you would have seen Number of contacts: 5 and First Contact Name: ‘Joseph Guadagno’ in the developer tools console.
Wow, that was a lot. As you can see, the initial setup is a little challenging but once it is done there is nothing you really need to do except for build your application.
A year ago today, the cardiologist successfully cleared the blockage and got me on the path to a healthier life.
There are quite a few factors that contribute to clogged arteries and/or being out of shape. Here are some of the most common:
Looking at the list, I checked off seven of the eight.
These check marks :white_square_button: are not something I am proud of. I am quite disappointed :disappointed: in myself for letting myself get there. I know I could have gone with the excuses that I was doing a lot of traveling with work, public speaking, staying up late with work, preparing presentations, etc.; the list could go on. But those are just excuses! I let myself go.
Those that know me, know that I have a few mottos, or things I live by. One of my favorites is
I work to live, I don’t live to work!.
Somewhere along the way, I lost sight of that, and I guess I needed this wake-up call.
Thanks to the proverbial kick in the pants, and some medication, I am back on track. I watch what I eat now. I have five smaller meals a day that are heart-healthy, including vegetables, fruits, etc. I’ve avoided fried foods, foods high in salt, and most sweets. I’ve stopped drinking alcohol. I try to get around 15,000 steps a day now, which is really hard while being stuck at home.
You’ll notice I used the word avoid. I did not eliminate fried foods, sweets, etc.; I just have them way less frequently. The reason is that you still have to live, and cutting everything out usually leads to failure.
Looking at the previous list, I now have two of the eight checked. I can’t change my age or family health history.
Times are hard now, more than ever, for most of us. Take a step back and look at your life and situation to see what decisions you are making and how they impact you and those around you. Remember, you are not only affecting yourself but also those around you.
You, too, can work to live and not live to work.
This event, a year ago, was my reminder that…
I work to live. I don’t live to work!
To get started, you need to add a reference to the `Microsoft.Azure.Functions.Extensions` package in your Azure Functions project.
```powershell
Install-Package Microsoft.Azure.Functions.Extensions -Version 1.0.0
```
Now you need a class to register your services. I use `Startup` to be consistent with ASP.NET Core, but you can use whatever class name you want.
In that class, you need to inform the Azure Functions SDK that this class is the startup class. To do this, add the assembly attribute `FunctionsStartup` to the class file.
```csharp
[assembly: FunctionsStartup(typeof(Startup))]
```
This will require the following using statement.
```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
```
Have your class inherit from `FunctionsStartup`. Doing so requires your class to override the `Configure` method. The `Configure` method is where you register the services that you would like to inject. Since Azure Functions relies on the .NET Core dependency injection features, you can choose any supported service lifetime.
```csharp
using MyFunctions.Samples;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;

[assembly: FunctionsStartup(typeof(Startup))]
namespace MyFunctions.Samples
{
    public class Startup: FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.TryAddSingleton<ISettings>(s => new Settings());
        }
    }
}
```
In this example, I am registering the `ISettings` interface as a singleton within the dependency injection system.
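For reference, here is a minimal sketch of what that `ISettings` pair could look like. The `Settings` implementation below is a hypothetical stand-in, although the `MySetting` property matches the usage shown later in this post:

```csharp
// Hypothetical settings contract and implementation for this example;
// only the shape matters for the dependency injection registration.
public interface ISettings
{
    string MySetting { get; }
}

public class Settings : ISettings
{
    public string MySetting => "A value from configuration";
}
```

Singleton is only one option; `AddScoped` and `AddTransient` registrations work the same way inside `Configure` if your services should not be shared across invocations.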
Now navigate to the class with your function.
NOTE You’ll have to remove the `static` modifier from the class and your function.
At this point, you can directly inject the registered services.
```csharp
private readonly ISettings _settings;

// Class Constructor
public MyFunctionApplication(ISettings settings)
{
    _settings = settings;
}
```
Then you can use `_settings` anywhere in your code, like this example:
```csharp
log.LogInformation($"Value from Settings class='{_settings.MySetting}'");
```
That’s it. Pretty easy. I’ve included links to the Azure Functions documentation and a sample repository that I used below.
The good news is the learning does not need to stop! There are plenty of virtual conferences happening everywhere; many of them are free. A lot of speakers have started live coding, like me at https://jjg.me/stream.
I’ll try and get Desert Code Camp kicked off when it is safe to do so.
I hope you understand.
Before we assign a role, we should take a look at what Azure RBAC is. Azure RBAC, or Azure Role-Based Access Control, is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources. It allows you to create roles or use predefined roles for your applications.
Azure RBAC includes several built-in roles that you can use. The following lists four built-in roles; the first three apply to all resource types.

- Owner: Full access to all resources, including the right to delegate access to others.
- Contributor: Can create and manage all types of Azure resources, but can’t grant access to others.
- Reader: Can view existing Azure resources.
- User Access Administrator: Manages user access to Azure resources.
If you don’t find a role that fits your needs, you can create custom roles. From what I have found, the default roles are adequate for my use.
Assigning a role to an application grants a set of permissions on the Azure resource to the given application. In the sample below, we are going to assign the `Storage Blob Data Contributor` role to our application.
In the Azure Portal, navigate to the resource that you want to provide access to and click on ‘Access control (IAM)’ in the left menu.
There are two ways to add the role.
Enter the following:
Name | Value | Description |
---|---|---|
Role | Storage Blob Data Contributor | Add what makes sense for your application. Not sure which role to use? Hover over the ‘i’ or check out the permissions mentioned earlier in this post. |
Assign access to | Azure AD user, group, or service principal | |
Select | The name of the application | The default will be your user id. Type in the first couple of characters of the application name. |
If you want to check what applications/users have access to a given resource
Underneath ‘Find’, choose the type of managed identity you want to check. The default of `Azure AD user, group, or service principal` should be enough. If you have a lot of resources, you can narrow the search results down by choosing another identity type.
Once you see the resource you want to check roles on, click it and you will see any permissions assigned. In this example, there were no permissions assigned.
You can view all of the roles assigned to a given resource in Azure.
This will list all of the registered applications and/or users that have access to this application.
The number 1 on the image tells us how many roles we have assigned in our subscription, not for this resource.
The number 2 on the image provides you the ability to narrow down the results. In this case, I have it filtered to those applications/users that have the `Storage Blob Data Contributor` role.
The number 3 on this image lists all of the applications/users that match the filters above.
Your setup may vary depending on the IDE you are using: Visual Studio, JetBrains Rider, IntelliJ, Visual Studio Code, etc. I’m going to show you how to set up your environment variables to use `DefaultAzureCredential`. For this, you will need the Application (Client) ID, Directory (Tenant) ID, and Client Secret (password) obtained from registering your application with the Azure portal. If you need to register an application, check out the post Register an application.
Name | Corresponding Value | Value |
---|---|---|
`AZURE_CLIENT_ID` | The Azure application/client id | 6c04f5c5-97f7-486d-bbb2-eeeeeeeeee |
`AZURE_CLIENT_SECRET` | The client secret/password | QPxyBvw3.UE8Bw6AJAt63pWx~BB40deded |
`AZURE_TENANT_ID` | The directory/tenant id | bee716cf-fa94-4610-b72e-5df4bf5a9999 |
NOTE These are not real values! :smile:
NOTE Depending on your IDE, Terminal, etc, you may need to restart it after updating these values.
The procedure may vary depending on your environment/shell. For ZSH/bash, add the following to your profile.
```bash
export AZURE_CLIENT_ID=6c04f5c5-97f7-486d-bbb2-eeeeeeeeee
export AZURE_CLIENT_SECRET=QPxyBvw3.UE8Bw6AJAt63pWx~BB40deded
export AZURE_TENANT_ID=bee716cf-fa94-4610-b72e-5df4bf5a9999
```
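Once these variables are set, the Azure Identity library picks them up automatically. As a rough sketch of the consuming side (assuming the Azure.Identity and Azure.Storage.Blobs packages, and a hypothetical storage account name):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential reads AZURE_CLIENT_ID, AZURE_CLIENT_SECRET,
// and AZURE_TENANT_ID from the environment, among other sources.
var credential = new DefaultAzureCredential();

// 'mystorageaccount' is a hypothetical account name for this sketch.
var blobServiceClient = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    credential);
```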
Any questions, please feel free to send me an email or tweet. Like what you see? Watch my stream and/or subscribe to my YouTube channel.
I’ve been following the pattern of creating a dedicated test application to validate that everything works locally. By application, I don’t mean an executable or JavaScript application; I mean registering an application with the Microsoft Identity Platform.
There are two ways to do this: the Azure Command Line Interface (CLI) or the Azure Portal. I’ll demonstrate both.
If you don’t have the CLI installed and prefer the command-line, check out the installation instructions.
To register your application with Azure using the Azure CLI, open up Terminal, Bash, Command Prompt, ITerm, or whatever your preferred command prompt is.
First, you need to log in with the command line.
```bash
az login
```
Once logged in, the next step is to create the Azure Active Directory service principal. Creating the service principal registers the application. You can use the `az ad sp` command, which stands for ‘Active Directory’ ‘Service Principal’; we are going to use the `create-for-rbac` sub-command (Documentation). The command looks similar to this:
```powershell
az ad sp create-for-rbac `
    --name "<name>" `
    --role "<role>" `
    --scopes /subscriptions/<scope-subscription>/resourceGroups/<scope-resource-group>/providers/Microsoft.Storage/storageAccounts/<scope-resource-storage>
```
NOTE This should be entered on one line, or use the backtick line-continuation character as shown.
Replace the following ‘tokens’ with your actual values:

Name | Description |
---|---|
`name` | The name for the application |
`role` | The Azure Active Directory role you want to assign to this application |
`scope-subscription` | The subscription id of your Azure subscription |
`scope-resource-group` | The resource group that the storage container belongs to |
`scope-resource-storage` | The Azure Storage container you want to grant access to |
Since the `name` and `role` can contain spaces, you should wrap them in double quotes (").
Assuming you have the authorization and the syntax is correct, the call will return a JSON payload that looks like this:
```json
{
  "appId": "6c04f5c5-97f7-486d-bbb2-eeeeeeeeee",
  "displayName": "My Test Application - Local Development",
  "name": "https://MyTestApplication.LocalDevelopment",
  "password": "QPxyBvw3.UE8Bw6AJAt63pWx~BB40deded",
  "tenant": "bee716cf-fa94-4610-b72e-5df4bf5a9999"
}
```
Note I’ve changed the values here from the values I received.
At this point, you’ll want to save the values of `appId`, `password`, and `tenant`. Once you close your terminal, you will not be able to retrieve the password again.
You can find more about this approach on the Azure Documentation site.
If you are like me, you like to do most of the work in the portal, although I find myself using the command line more and more. Let’s take a look at registering your application with the portal.
Start with signing into the portal and navigate to your Azure Active Directory. Look for the ‘App Registrations’ blade in the menu on the left and click on it.
Click on ‘+ New registration’
Name | Value | Description |
---|---|---|
Name | my application | The name you want to identify with this application |
Supported account types | Accounts in any organization... | Choose the type that fits your needs |
Redirect URI (Optional) | | This is required if you are going to be using the application to sign in. I’m leaving this blank for this application. |
You will be presented with the information around the application. Copy down the ‘Application (client) ID’ and the ‘Directory (tenant) ID’ for use later.
I named mine `rbac`, but I don’t think the name matters.

NOTE This is your only opportunity to copy the client secret. If you don’t, you will need to recreate it.
The next step would be to configure your workstation to use the credentials of the newly registered application.