Enrolling a Device in Samsung Knox Through the Browser

I’ve been exploring Samsung Knox. To enroll my test devices, I had been performing a hard reset of the phone and using the secret gesture to put the device into enrollment mode (drawing a plus sign on the initial welcome screen). While reading through the Knox documentation, I encountered a support document with a URL I had not seen before: https://me.samsungknox.com.

As it turns out, by going to this site you can enroll a device without performing a hard reset. This initiates Bluetooth enrollment. You do need to have another device nearby with Knox Deploy installed to push the profile to the target device.



日本語のC++ (Japanese Language C++)

I came across something that I thought was cool, though not something I would ever use: a set of C++ defines that lets Japanese-speaking coders write C++ using Japanese words in place of various C++ control statements and keywords. The repository for this project can be found at https://gist.github.com/HerringtonDarkholme/2dbb2a1ec748a786f54908320447b3dd. Here are a few lines from the code.

#define エスティーディー std  // "esu-tee-dee" (std)
#define アイオーストリーム <iostream>  // "aiosutorīmu" (iostream)
#define ユージング using  // "yūjingu" (using)
#define イフ if  // "ifu" (if)
#define インクルード #include  // "inkurūdo" (include)
#define イント int  // "into" (int)

You can probably guess what is going on here, but I will explain. The Japanese characters here are Katakana, which is commonly used to phonetically spell out words from other languages. The very first define is kind of clumsy: it spells out how we would say std, as in ess-tee-dee. The rest of the defines spell out the English pronunciations of the words. I found out about this by way of a tweet that showed some C++ written using a substantial amount of Japanese.
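
To make that concrete, here is a minimal program written with a few of those defines. This is my own sketch rather than code from the gist. Note one wrinkle: the インクルード define can’t actually do its job, because macro expansion is never re-interpreted as a preprocessor directive, so the #include has to stay in its ordinary form. You may also need to tell your compiler that the source is UTF-8 (/utf-8 on Visual C++) for the Katakana identifiers to be accepted.

#include <iostream> // インクルード can't replace a directive, so this stays as-is

#define エスティーディー std
#define イフ if
#define イント int

イント main()
{
	イント value = 10;
	イフ (value > 5)
	{
		エスティーディー::cout << "value is greater than 5" << エスティーディー::endl;
	}
	return 0;
}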

While it looks cool, I prefer to minimize my use of defines in my C++ code. But I’m also not a native 日本語 speaker. I imagine this provides some advantages for someone who is.



How Do the Samsung FlipSuit Cards Work?

I’ve been looking at the Samsung FlipSuit cards lately. For the uninitiated, these are cards that one slips into a phone case, causing the phone to change the content of its inner and outer screens to match the theme on the card. You can find these for the Galaxy Flip 4/5 and, more recently, the Galaxy S24. Two questions I’ve seen asked while reading about these are “How do they work?” and “Can I make my own?” While I don’t have sufficient technical data to enable you to go off and make your own, I do have some knowledge of how these work. They are not the first iteration of such a system; the earliest form dates back to 2018.

Galaxy Friends: An Ancestor of FlipSuit

At the 2018 Samsung Developer Conference, one of the technologies and solutions on display was Samsung Theme Studio. With Theme Studio, a creative could package changes to the UI to give the phone an entirely new look and feel. These themes are packaged in an APK (like other Android applications) and made available for download in the Samsung App Store. This tool is relevant to another offering released that same year.

At the same conference, Samsung showcased the Galaxy Friends accessory program. Under this program, someone could make an NFC-enabled case and associate that case with a theme and other content. The theme would automatically be applied just by putting the case on a phone. In addition to the theme, content and applications could be associated with the case. Someone could get exclusive access to content (which could be changed as frequently as once a day) through having the case. It’s not hard to draw a line between this and the FlipSuit cards.

Where can you find the Galaxy Friends program now? I don’t think that you can. The email address associated with it no longer exists. Outside of a few forum posts, there is no mention of it on Samsung’s site at all. The strongest remaining evidence that it existed is in the form of a couple of Amazon listings for Galaxy Friends cases for the Note 10 (an Iron Man case and a Spider-Man case). Reviews for them mention the NFC chip and the theme that gets applied to the phone.

Does that mean Galaxy Friends is dead and gone? Well, no. Searching for that name yields few results, but searching for the functionality is a different matter.

Knox Configure

Over the years, Samsung has moved its products and services related to device management, configuration, and security under the Knox branding. While you can find mention of Knox across a broad range of products (don’t be surprised if you find a Wi-Fi enabled clothes washer with Knox), we are most interested in the services on mobile devices. I’ve found that some people are confused about what Knox is because of the broad list of products on which it can be found. The specific Knox service in which we have interest is Knox Configure.

Knox Configure provides functionality for enrolling and configuring a device before it has even been removed from its box. A device’s unique ID can be added to someone’s Samsung Knox account (usually through a Samsung reseller). A device can also be enrolled during setup, or enrolled by an accessory. For accessory enrollment, the accessory could be an NFC device, a cover, or a USB device. You might think that NFC devices and covers are identical, but more on that in a moment.

Once enrolled, a device will apply a profile. The profile can contain a variety of settings and applications to be applied to the device. There are two profile attributes of special interest here: the theme and the accessory ID. The theme, according to Samsung’s documentation, would be created with Theme Studio. There’s a bit of a breakdown in process and/or documentation here. While Knox Configure points to the Theme Studio page, telling developers they should get access to it, Theme Studio access is restricted. Years ago, anyone could get access to it. These days, someone wishing to have access can apply during windows that open every other year (the next window is said to be in 2025). I get the impression that the Knox Configure group and the Theme Studio group within Samsung are not completely in sync with each other. For those reading this with hopes of making your own themes, here is your first obstacle.

The accessory ID attributes are meant to be written to the USB, NFC, or cover device. According to Samsung’s current documentation, this is something that should be done by a Samsung Accessory Partner.

These profiles are housed on Samsung’s servers and associated with a license that the person making the profile had to purchase. That these profiles are tied to callbacks to Samsung services implies that one wouldn’t be able to arbitrarily make their own card without Samsung’s involvement.

There may be other services Samsung has for distributing content based on NFC cards that I don’t know about. Note that Knox is just one of them.

NFC Device Enrollment

How does the NFC device invoke enrollment? One of the messages encoded on an NFC card will be a URL. This URL starts with smdm://accessory (Samsung Mobile Device Management?). The URL will have additional parameters attached to it, but this URI prefix is important: there is a package registered to handle URLs that start with this prefix. The common name for the package is “Knox Enrollment.” The package identifier is com.sec.enterprise.knox.cloudmdm.smdms. I’ve found that if I use this URL without specifying any parameters, the process crashes and I see output in logcat. I’ve been able to figure out some of the other parameter names, but I have no idea what their potential values would be. Here are some of the other potential parameters passed; an illustration of the URL’s overall shape follows the list.

  • countryiso_code
  • deviceProductType
  • email
  • mdm_token
  • program
  • seg_url
  • service_type
  • update_url

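Purely as an illustration of the shape such a URL might take, here is a mock-up. The parameter names are the ones listed above; the values are placeholders, since I don’t know what legitimate values look like.

smdm://accessory?program=<?>&service_type=<?>&countryiso_code=<?>&email=<?>&mdm_token=<?>&update_url=<?>
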
I speculate that if any information is to be found on potential values for these, it will be found in the Knox SDK documentation. These values get passed to the Knox Enrollment service. Once the device is enrolled, the associated configuration items are applied, including any themes or applications associated with the profile. However, I don’t think that information on all of the values will be found in documentation that is generally available. Samsung explicitly states in one of their documents that only accessory manufacturers can inject the IDs into the accessories.

Image from Samsung documentation stating that only manufacturers can inject identifiers into accessories.

Examining a FlipSuit Card

Though I’ve seen the FlipSuit cards described as NFC cards, I’m not entirely convinced of this. When I hold one of these cards to an NFC reading device, I get nothing. It is possible that it uses some feature of NFC unfamiliar to me. I don’t have either a Flip 5 or a Galaxy S24 (the most recent phone I have my hands on is an S23). Earlier, I mentioned that Knox Configure lists NFC devices and covers as separate accessory types. I think this is why: there is something different between the communication of an NFC card and that of the FlipSuit cards. I held a FlipSuit card to the back of a Galaxy Flip 5 while an NFC examination application was running and got no information.

What Happens When a Knox Configure License Expires

Knox Configure and cases with themes have been around long enough that we know what happens when a theme expires. Consider the Galaxy S20 and its LED case. The LED case had a theme associated with it, and that theme expired on or around January 1, 2023. Some users in a Reddit thread discussed their experience. Of note, before the theme reached its expiration date, there was a notification for it. Note that this notification refers to “Samsung Galaxy Friends.” It’s not clear to me whether the theme expired because the Galaxy Friends program was being sunset or because the license associated with the theme was at the end of its life.

Screen with notification about expiring theme.

When the date was reached, the theme automatically uninstalled.

Notification of Theme Uninstalling.

The Pathway To Making Theme Cards

I’ve mentioned information and obstacles. If you still wanted to go through with making a theme card, what would you do? You need to get access to Theme Studio. While the next window for applying isn’t until 2025, this might not be as much of a problem as some might initially think: between now and when the window opens, you can practice designing your themes and put a portfolio together for the application process. After you have access to Theme Studio and are able to publish themes, you might want to become familiar with Knox Configure. You can get a 90-day subscription to Knox Configure at no cost, though you won’t need a deep understanding of it. For making the cards themselves, you will need to go through a Samsung Accessory Partner. There’s no way around that of which I know, since the information I have on how data is written to the card is incomplete.

Though not in furtherance of making theme cards, in one of my next posts I will be exploring a simple Knox Configure scenario to customize a Samsung device. I’ll also be posting the code that I used for examining NFC cards; it was written completely in HTML and JavaScript.




NASA Receives Orders to Develop a Lunar Time Zone

The White House has sent NASA orders to develop a lunar time zone. At first glance, some might think this is silly: there are no people on the moon, so why is a time zone needed? While it is easy to coordinate time on Earth, coordinating time between the Earth and the moon is a bit different because of the effects of gravity/mass on the passage of time. The moon has less mass than the Earth, resulting in time passing at a slightly different rate. The difference in the rate of the passage of time is tiny, but it can be significant when working with high-precision clocks. A clock on the moon and one on Earth would drift apart by about 58 microseconds per day.
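
As a quick back-of-the-envelope calculation (my arithmetic, not a figure from the announcement), that drift accumulates quickly by precision-timing standards:

\[ 58\ \mu\text{s/day} \times 365\ \text{days} \approx 21\ \text{ms per year} \]

Light travels roughly 6,300 km in 21 ms, so for navigation and communication systems that depend on precise timing, an uncorrected clock would quickly become useless.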

NASA was asked to come up with a system for Coordinated Lunar Time by the end of 2026. The effort will include contributions from international bodies and the 36 nations that are party to the Artemis Accords. This will add to the systems of time used in astronomy, such as the solar day, the mean solar day, the sidereal day, and the 11 time zones used on Mars.



Eclipse 8 April 2024

If you are in the United States or Mexico, come Monday, 8 April, you might be able to view a total solar eclipse. Even if you are not in the path of totality, you might have an opportunity to view a partial eclipse. If you are interested in viewing, the key pieces of information you need are when it will happen, where it can be viewed, and what you need for viewing. It isn’t too late just yet to get viewing equipment (if you are interested in any), but if something shows a delivery date of Monday, consider it something that will likely not arrive in time.

When and Where

The time at which you can see the eclipse will vary by location and time zone. NASA has a lookup page in which you can enter a zip code and get the information for your location. Even if you are not in the path of totality, you might be able to see a partial eclipse. Here are times for a few locations in the path of totality.

Location | Partial Begins | Totality Begins | Maximum | Totality Ends | Partial Ends
Dallas, Texas | 12:23 CDT | 13:40 CDT | 13:42 CDT | 13:44 CDT | 15:02 CDT
Idabel, Oklahoma | 12:28 CDT | 13:45 CDT | 13:47 CDT | 13:49 CDT | 15:06 CDT
Little Rock, Arkansas | 12:33 CDT | 13:51 CDT | 13:52 CDT | 13:54 CDT | 15:11 CDT
Poplar Bluff, Missouri | 12:39 CDT | 13:56 CDT | 13:56 CDT | 14:00 CDT | 15:15 CDT
Paducah, Kentucky | 12:42 CDT | 13:59 CDT | 14:01 CDT | 14:02 CDT | 15:18 CDT
Carbondale, Illinois | 12:42 CDT | 14:00 CDT | 14:01 CDT | 14:03 CDT | 15:18 CDT
Evansville, Indiana | 12:45 CDT | 14:02 CDT | 14:04 CDT | 14:05 CDT | 15:20 CDT
Cleveland, Ohio | 13:50 EDT | 15:13 EDT | 15:15 EDT | 15:17 EDT | 16:29 EDT
Erie, Pennsylvania | 14:02 EDT | 15:16 EDT | 15:18 EDT | 15:20 EDT | 16:30 EDT
Buffalo, New York | 14:04 EDT | 15:18 EDT | 15:20 EDT | 15:22 EDT | 16:32 EDT
Burlington, Vermont | 14:14 EDT | 15:26 EDT | 15:27 EDT | 15:29 EDT | 16:37 EDT
Lancaster, New Hampshire | 14:16 EDT | 15:27 EDT | 15:29 EDT | 15:30 EDT | 16:38 EDT
Caribou, Maine | 14:22 EDT | 15:32 EDT | 15:33 EDT | 15:34 EDT | 16:40 EDT

What do you need for viewing?

There are a lot of options for equipment. The simplest option would be a pinhole projector or some variation of it. To make one of these, you only need a stiff piece of material, such as cardboard, and a white sheet of paper. Drill a hole in the center of the cardboard. If you aim the hole in the cardboard at the sun and place the paper behind it with a decent gap between them, you’ll see a circle of light on the paper. That’s a projection of the sun. As the eclipse progresses, it will become more apparent that this is an image of the sun as you see the shape of the moon encroaching on the projection. This solution isn’t much different from the earliest cameras (the camera obscura), which were made by blocking all light from entering a room except through a single hole or lens. An image of what is going on outside the room is projected on the wall opposite the hole.
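
If you want to size your projector, a rule of thumb helps (this is standard pinhole math, not something specific to this eclipse): the sun is about half a degree across, so the diameter of the projected image is roughly the hole-to-paper distance divided by 110.

\[ d_{\text{image}} \approx 2L \tan\!\left(\frac{0.53^\circ}{2}\right) \approx \frac{L}{110} \]

A gap of one meter between the cardboard and the paper gives a solar image a little under a centimeter across.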

The next step up for viewing would be solar viewing glasses. These usually have cardboard frames with a thin plastic film where the lenses would go. Through either absorption or reflection, these glasses prevent the overwhelming majority of the light from reaching your eyes so that you can safely view the sun. They are as cheap as 0.80 USD for disposable units and as much as 7 USD for some of the sturdier ones. If you are a T-Mobile subscriber, you might have grabbed some solar glasses through T-Mobile Tuesdays for free.

If you plan to use your smartphone to take pictures, you might run into a challenge if you try to put the glasses in front of your phone. Even if you zoom your phone’s camera to use a specific lens, some phones will still switch lenses based on changes in lighting conditions. It may be easier to get a solar filter specifically made for the phone.

Beyond glasses, there are magnifying devices for viewing the eclipse. The least expensive of these would be eclipse binoculars. These darkened binoculars give you a much more dramatic perspective of the show. On the higher end, there are telescopes that have been fitted with solar filters (similar to the glasses). This is an option to consider only if you already have a telescope. Keep in mind that if you are using a regular telescope, do not use the viewfinder to aim the scope; that would be a great way to cause eye damage. Instead, keep the caps on the viewfinder so that it is blocked, and put a sheet of paper behind the viewfinder. When the scope is aimed properly, the silhouette of the viewfinder will be a perfect circle.

Finishing Preparations

Check your local weather before Monday. If you know the weather might not be good, you may be less disappointed if and when it doesn’t allow viewing. If you have the option, make it a work-from-home day. Remember to block your calendar around the time of the eclipse’s maximum. If you plan on viewing away from home, load your glasses into your car so that they are not forgotten. If you plan to order viewing equipment, make sure you have it in hand by Sunday!




Disabling Driver Data Collection (GM Vehicles)

Some recent news articles have made more people aware of data collection in GM vehicles that could, for some, raise insurance rates. The OnStar Smart Driver program collects information about driving habits, such as maximum speed, instances of hard braking, hard acceleration, and seatbelt use. This information is collected by GM and made available to at least two known data brokers (LexisNexis and Verisk) and can end up working its way to one’s insurance company, affecting rates. Two instances of people unhappy with this service include a person in Florida suing both Cadillac and LexisNexis after his insurance rates increased, and a Chevy Bolt driver whose insurance rates went up.

Did GM Automatically Opt Me Into This Program?

The OnStar Smart Driver Q&A says the following.

Do you auto enroll customers in OnStar Smart Driver?

No, we do not auto enroll customers into OnStar Smart Driver. All customers must opt-in to be enrolled.

https://www.onstar.com/support/faq/smart-driver

How Do I Opt Out Through the App?

Depending on your driving style, this program could be something that works to your advantage. But in either case, it is good to know how to opt out of it. Many GM vehicles have brand-specific variations of an application. For my vehicle, the application is myChevrolet (Android, iOS). Other variants include myGMC (Android, iOS), myBuick (Android, iOS), and myCadillac (Android, iOS).

To turn the feature off, open your GM app and go to the section titled “Trip Overviews and Insights.” (You’ll see “OnStar Smart Driver” listed there.) Select it.

On the next screen, click on the gear icon in the upper-right corner to open the settings for this feature.

From there, you’ll see a switch for turning the feature off. Use this switch to opt out of the program.

What if I don’t have the App Installed?

If you don’t have the application installed but know your GM login, you can unenroll through the website.

From the OnStar Smart Driver Q&A, the steps to perform are as follows.

To unenroll via the vehicle brand website, sign into your account. Click on “Account,” scroll down the page and click on “Data & Privacy.” Scroll down to “OnStar Smart Driver” and select “Manage Settings.” From there, switch off the “OnStar Smart Driver” toggle.

If you want to know more about the OnStar Smart Driver program, you can read about it here, though I thought the website was a bit light on information.

I want my collected data removed. What do I do?

You will need to contact LexisNexis or Verisk to see what data they have on you and for more information about having that data removed.



My First Infrared Photograph 📷

I have a DSLR that has been modified for infrared photography. Digital cameras usually have filters that block out light outside of the visible spectrum. Without these filters, the camera’s sensor will still respond to light that we can’t see. If you have an IR remote, you can test this out yourself by viewing the emitter end through the camera on your phone while holding down a button on the remote.

The world looks different when you view how IR or UV light reacts with objects. There are elements of an object that may be invisible until you view them in another spectrum, and elements that might disappear altogether. I wanted to explore this more, which is why I have this modified camera. There are a few things that I’ve learned along the way. Once a camera is modified for IR shooting, it’s not very good for regular photography; you won’t want to take family photos or vacation pictures with such a modified camera. The viewfinder on the DSLR also becomes useless for some shooting situations. Since our eyes cannot see IR light, when some filters are applied to the camera, the view through the viewfinder looks black. Instead, we must enable live view on the camera’s display to preview the image.

Removing the IR blocking filter from the camera isn’t the only adjustment that needs to be made. Better results come from also adding a filter to the lens that blocks some of the visible colors of light. Post-processing of the photo will also be necessary.

I had been wanting to take a photograph with the camera for the last week, but obligations to others and my work schedule prevented me from being available during the prime hours of the day when the sun would be where I wanted it. Today, between meetings, I had a chance to run outside, snap a couple of photographs of a flower, and run back inside. The image at the top of this post is one of the results. That image was taken with a filter on the lens that only lets IR light pass. If I remove the filter and allow visible light to come through, I get a photo that looks partially desaturated; though color is present, the influence of the IR on the photo is discernible.

While out at “the farm,” I took some pictures of some chickens. As appears to be the case with many things that are red, in infrared they appear closer to white. The chickens’ combs look especially white.

I’ll be taking more occasional photographs using this camera. When I do, you’ll generally be able to find them on my Instagram page.

There’s a lot more for me to learn. I hope to have some interesting shots to post.



Rechargeable USB-C Batteries

Over the past year, I’ve transitioned some of my devices from conventional to rechargeable batteries. I’ve used rechargeable batteries before and had generally been disappointed with them. The need for a separate charger for each battery type sometimes meant extra hardware to keep up with.

With these batteries, one of the main advantages is that they charge over a USB-C cable. Though they came with USB-A to USB-C cables for charging, I generally use the cables and power supplies that I already have for my phones.

These don’t last as long as a conventional battery, but I’m okay with that since the charging experience is no-fuss. I often forget to turn off my voice recorder and run through batteries in it quickly. With these, it is less of a concern.

Presently, I’m using AA, AAA, and 9-Volt batteries. You can find all of these on Amazon among other places. The affiliate links for the ones that I purchased are below.



Triangle Collision (2D)

With some of the free time I had, I decided to remake a video game from my childhood. Though the game was rendered with 3D visuals, it was essentially a 2D game. I’ll need to detect collisions when two actors in the game occupy the same footprint. The footprints will usually be rectangular, but these rectangles could be oriented in any way. Detecting orthogonal overlapping bounding boxes is a simple matter of checking whether some points are within range. But if the boxes are not oriented parallel or perpendicular to each other, then a different check must be done.

Rectangles can be constructed from two triangles, so the solution for detecting overlapping rectangles is built upon finding intersections with triangles. Let’s look at point-triangle intersection first, and visualize some overlapping and non-overlapping scenarios.

Examining this visually, you can intuitively state whether any selected pair of triangles or points intersect with each other. But if I provided you only with the points that make up each triangle, how would you state whether overlap occurs? An algorithmic solution is needed. The two solutions I show here did not originate with me; I found solutions on StackOverflow and Rosetta Code. But what I present here is not a copy and paste of those solutions. They’ve been adapted somewhat to my needs.

For my points and triangles, I’ll use the following structures.

struct Point
{
	float x;
	float y;	
};

struct Triangle
{
	Point pointList[3];
};

There are lots of ways to detect whether a point is within a triangle. Before writing my own, I found a simple one on GameDev.net. This is an adjusted implementation.

float sign(Point p1, Point p2, Point p3)
{
	// 2D cross product; its sign tells which side of the edge (p2, p3)
	// the point p1 falls on.
	return (p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y);
}

bool pointInTriangle(Point pt, Triangle tri)
{
	float d1, d2, d3;
	bool has_neg, has_pos;
	d1 = sign(pt, tri.pointList[0], tri.pointList[1]);
	d2 = sign(pt, tri.pointList[1], tri.pointList[2]);
	d3 = sign(pt, tri.pointList[2], tri.pointList[0]);
	has_neg = (d1 < 0) || (d2 < 0) || (d3 < 0);
	has_pos = (d1 > 0) || (d2 > 0) || (d3 > 0);
	// The point is inside (or on an edge) only when it falls on the same
	// side of all three edges.
	return !(has_neg && has_pos);
}

Usage is simple: passing a point and a triangle to the function pointInTriangle returns a bool value that is true if the point intersects the triangle.

Point p1 = { 5, 7 };
Point p2 = { 3, 4 };
Point p3 = { 3, 3 };

Triangle t1 = { { { 2, 2 }, { 5, 6 }, { 10, 0 } } };

std::wcout << "Point p1 " << (pointInTriangle(p1, t1) ? "is" : "is not") << " in triangle t1" << std::endl;
std::wcout << "Point p2 " << (pointInTriangle(p2, t1) ? "is" : "is not") << " in triangle t1" << std::endl;
std::wcout << "Point p3 " << (pointInTriangle(p3, t1) ? "is" : "is not") << " in triangle t1" << std::endl;

Having point-triangle intersection is great, but I wanted triangle-triangle intersection. I decided to use an algorithm from Rosetta Code. The code I present here isn’t identical to what was presented there, though. Some adjustments have been made, as I prefer to avoid using explicit pointers to functions. My definitions of Triangle and Point are expanded to accommodate the implementation’s use of indices to access the parts of the triangle. By unioning the two definitions, I can use either notation for accessing the members.

struct Point
{
	union {
		struct {
			float x;
			float y;
		};
		float pointList[2];
	};
};

struct Triangle
{
	union {
		struct {
			Point p1;
			Point p2;
			Point p3;
		};
		Point pointList[3];
	};	
};

The algorithm makes use of the determinant function, an operation from matrix math. I’m going to skip over explaining the concept of determinants in general; an explanation of matrix operations deserves its own post.
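
One fact is worth stating, though, because the winding check below depends on it (this is standard geometry, not something specific to this algorithm): for vertices \((x_1, y_1), (x_2, y_2), (x_3, y_3)\),

\[ \text{Det2D}(T) = x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) = 2 \cdot \text{SignedArea}(T) \]

and this value is positive when the vertices are ordered counter-clockwise and negative when they are ordered clockwise.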

inline double Det2D(const Triangle& triangle)
{
	return +triangle.p1.x* (triangle.p2.y- triangle.p3.y)
		+ triangle.p2.x * (triangle.p3.y - triangle.p1.y)
		+ triangle.p3.x * (triangle.p1.y - triangle.p2.y);
}

Another key attribute of this implementation is that it optionally enforces the points of a triangle being provided in counter-clockwise order. If they are always passed in the same order, there is an opportunity for faster execution of the code. I chose general usability over speed, but I did mark the code with attributes to suggest to the compiler which branches to optimize for.

void CheckTriWinding(const Triangle t, bool allowReversed = true)
{
	double detTri = Det2D(t);
	[[unlikely]]
	if (detTri < 0.0)
	{
		if (allowReversed)
		{
			Triangle tReverse = t;
			tReverse.p1 = t.p3;
			tReverse.p3 = t.p1;
			tReverse.p2 = t.p2;
			return CheckTriWinding(tReverse, false);
		}
		else throw std::runtime_error("triangle has wrong winding direction");
	}
}

The original algorithm had a couple of functions that were used to check for boundary collisions or to ignore boundary collisions. I have expanded them from two functions to four. Whereas the original often structured a triangle as three points passed to a function individually, I prefer to pass triangles as a packaged structure. There is a place in the algorithm where a calculation is performed on a triangle formed from two points of one triangle and one point of the other triangle. The second forms of these functions accept points and create a triangle from them.

bool BoundaryCollideChk(const Triangle& t, double eps)
{
	return Det2D(t) < eps;
}

bool BoundaryCollideChk(const Point& p1, const Point& p2, const Point& p3, double eps)
{
	Triangle t = { {{p1, p2, p3}} };
	return BoundaryCollideChk(t, eps);
}

bool BoundaryDoesntCollideChk(const Triangle& t, double eps)
{
	return Det2D(t) <= eps;
}

bool BoundaryDoesntCollideChk(const Point& p1, const Point& p2, const Point& p3, double eps)
{
	Triangle t = { {{p1, p2, p3}} };
	return BoundaryDoesntCollideChk(t, eps);
}

With all of those pieces in place, we can finally look at the part of the algorithm that returns true or false to indicate triangle overlap. The algorithm walks the edges of one triangle and checks whether all three points of the other triangle lie on the external side of each edge. If such a separating edge is found, the triangles do not collide. It then does the same check with the roles of the triangles swapped. If no edge of either triangle has all of the other triangle’s points on its external side, the triangles collide.

bool TriangleTriangleCollision(const Triangle& triangle1,
	const Triangle& triangle2,
	double eps = 0.0, bool allowReversed = true, bool onBoundary = true)
{
	//Triangles must be expressed anti-clockwise
	CheckTriWinding(triangle1, allowReversed);
	CheckTriWinding(triangle2, allowReversed);	

	//For each edge E of triangle 1,
	for (int i = 0; i < 3; i++)
	{
		int j = (i + 1) % 3;
		[[likely]]
		if (onBoundary)
		{

			//Check whether all points of triangle 2 lie on the external side of edge E.
			//If they do, the triangles do not collide.
			if (BoundaryCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[0], eps) &&
				BoundaryCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[1], eps) &&
				BoundaryCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[2], eps))
				return false;
		}
		else
		{
			if (BoundaryDoesntCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[0], eps) &&
				BoundaryDoesntCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[1], eps) &&
				BoundaryDoesntCollideChk(triangle1.pointList[i], triangle1.pointList[j], triangle2.pointList[2], eps))
				return false;
		}

		if (onBoundary)
		{

			//Now do the same check with the roles swapped: check whether all points of
			//triangle 1 lie on the external side of edge E of triangle 2.
			if (BoundaryCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[0], eps) &&
				BoundaryCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[1], eps) &&
				BoundaryCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[2], eps))
				return false;
		}
		else
		{
			if (BoundaryDoesntCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[0], eps) &&
				BoundaryDoesntCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[1], eps) &&
				BoundaryDoesntCollideChk(triangle2.pointList[i], triangle2.pointList[j], triangle1.pointList[2], eps))
				return false;
		}
	}
	//The triangles collide
	return true;
}

The basic usage of this code is similar to the point collision check: pass the two triangles to the function.

Triangle t1 = { Triangle{Point{3, 6}, Point{6, 5}, Point{6, 7} } },
			t2 = { Triangle{Point{4, 2}, Point{1, 5}, Point{6, 4} } },
			t3 = { Triangle{Point{3, 12}, Point{9, 8}, Point{9, 12} } },
			t4 = { Triangle{Point{3, 10}, Point{5, 9}, Point{5, 13} } };

auto collision1 = TriangleTriangleCollision(t1, t2);
std::wcout << "Triangles t1 and t2 " << (collision1 ? "do" : "do not") << " collide" << std::endl;
auto collision2 = TriangleTriangleCollision(t3, t4);
std::wcout << "Triangles t3 and t4 " << (collision2 ? "do" : "do not") << " collide" << std::endl;
auto collision3 = TriangleTriangleCollision(t1, t3);
std::wcout << "Triangles t1 and t3 " << (collision3 ? "do" : "do not") << " collide" << std::endl;

With that in place, I’m going to get back to writing the game. I can now check to see when actors in the game overlap.
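
Since the original goal was rectangular footprints, here is a minimal sketch of the wrapper I have in mind. This is my own code rather than something from the sources above, and it assumes a rectangle is given as four corners in counter-clockwise order: split each rectangle into two triangles along a diagonal, then test the triangle pairs.

struct Rectangle
{
	Point pointList[4]; // corners in counter-clockwise order
};

bool RectangleRectangleCollision(const Rectangle& r1, const Rectangle& r2)
{
	// Split each rectangle into two triangles along a diagonal.
	Triangle a1 = { {{ r1.pointList[0], r1.pointList[1], r1.pointList[2] }} };
	Triangle a2 = { {{ r1.pointList[0], r1.pointList[2], r1.pointList[3] }} };
	Triangle b1 = { {{ r2.pointList[0], r2.pointList[1], r2.pointList[2] }} };
	Triangle b2 = { {{ r2.pointList[0], r2.pointList[2], r2.pointList[3] }} };

	// The rectangles overlap exactly when some triangle from one
	// overlaps some triangle from the other.
	return TriangleTriangleCollision(a1, b1) || TriangleTriangleCollision(a1, b2)
		|| TriangleTriangleCollision(a2, b1) || TriangleTriangleCollision(a2, b2);
}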

Finding the Code

You can find the code in the GitHub repository at the following address.

https://github.com/j2inet/algorithm-math/blob/main/triangle-intersection/triangle-intersection.cpp



No, T-Mobile is Not Going to Fine Customers for Disliked SMS

Bottom Line: A rule affecting robo-texting (which is done without a phone) was mistaken for a rule governing/punishing users of T-Mobile phones. T-Mobile has not applied any new fines to customers for their SMS content.

Update 2023 December 31 11:58PM EST

Since I made this post, there have been a number of other statements and posts about this misunderstanding that I think are worth considering. The Associated Press published an article about this misunderstanding/misinformation.

“The change only impacts third-party messaging vendors that send commercial mass messaging campaigns for other businesses,” the company wrote in a statement emailed to The Associated Press.

Associated Press

There is now a community note under the post of one of the accounts responsible for many views of this item of misinformation on Twitter/X.

…Bandwidth the company the user is referring does not provide consumer service. These specific terms of service are for commercial/enterprise users of the T-Mobile network. This does not directly apply to P2P (Peer to peer) messaging.

Twitter/X Community note

There was also a post in the T-Mobile Community forums by a user who received a response from a T-Mobile Community Manager.

These changes only apply to third-party messaging vendors that send commercial mass messaging campaigns for other businesses. The vendors will be fined if the content they are sending does not meet the standards in our code of conduct, which is in place to protect consumers from illegal or illicit content and aligns to federal and state laws. 

HeavenM, T-Mobile Community Manager

Original Post

I watched a misunderstanding get legs and spread pretty far during the Christmas holiday weekend. The gist of the misinformation is that T-Mobile is going to fine customers hundreds or thousands of dollars for sending text messages of which T-Mobile doesn’t approve. This misunderstanding appears to be derived from a reading of a business code of conduct without an understanding of the terms involved or the partnerships between various companies. I’ll try to explain both in a moment.

Consumer and Non-Consumer Messages

If you have a T-Mobile phone and are sending text messages, those are consumer text messages. If you start a marketing campaign to send out mass SMS messages, create a 2FA service that sends out messages, or are using some other API access, those are non-consumer messages. A basic understanding of how consumer messaging works is common, but not so for non-consumer messaging. Let’s dig into that a bit more.

Non-consumer messages are often sent through an API. These messages are sometimes labeled A2P messages, which means “Application to Person.” The origin of these messages is not a phone. T-Mobile, AT&T, and Verizon provide the ability to post messages into their infrastructure for delivery. How do you get access to this functionality? Generally, you don’t. The carriers don’t let everyone have access to these services. Instead, there are a number of companies with which they have established agreements and granted access. If you want to use these services, you would establish a relationship with one of these companies, and they manage many of the other details of what needs to occur. Here are the names of some companies that provide such services.

  • Amazon Web Services
  • Bandwidth
  • BulkSMS
  • Call Hub
  • EZTexting
  • Hey Market
  • MessageBird
  • Phone.com
  • SalesMSG
  • Textr
  • Vonage
  • Wire2Air
  • Zixflow

There are a lot of other companies. You can find a larger list here. T-Mobile also has a document explaining the difference between consumer and non-consumer messaging.

As far as a consumer can tell, the messages are coming from a 10-digit phone number. These phone numbers may be referred to as A2P 10DLC (Application to Person 10-Digit Long Code) numbers. When a new messaging campaign is started, the number associated with the campaign must be registered, and the registration associates the number with an entity, brand, or campaign. Unlike with short-code messages, if a business also needs to allow customers to call them, it has the option of setting up a voice line associated with the same phone number.

The entities and phone numbers may also be assigned a trust rating. Entities that have a higher trust rating may be granted more throughput on a carrier’s network. If an entity’s trust rating becomes low, its granted capacity might be lowered or its messages may be disallowed altogether. Entities earn a reputation.

What Are These Rules Restricting?

In a nutshell, the rules published by T-Mobile, AT&T, and Verizon disallow text messaging campaigns involving unlawful material (including material that may be lawful in some states but not others), scam messages, spam, phishing attempts, and impersonation. Some carriers may specifically call out other types of material, but the same general characterization of restrictions applies. The following is what Bandwidth posted about the notice (Update 2024 January 2: Bandwidth.com has since restricted viewing of the notice; a screenshot of it can be viewed here).

T-Mobile is instituting three new fees for non-compliant A2P traffic sent by non-consumers that result in a Severity-0 violation. A Sev-0, (Severity-0) represents the most harmful violation to consumers and is the highest level of escalation with which a carrier will engage with Bandwidth. This applies to all commercial, non-consumer, A2P products (SMS or MMS Short Code, Toll-Free, and 10DLC) that traverse T-Mobile’s network.

With what I’ve shared so far, you may be able to recognize this as something that is not directed at regular customers using T-Mobile. The false information I encountered misidentifies the affected parties as subscribers of the phone service. Let’s dig into what a Sev-0 violation is.

  • Phishing messages that appear to come from reputable companies
  • Depictions of violence, messages engaging in harassment, defamation, deception, and fraud
  • Adult Material
  • High-Risk content that generates a lot of user complaints, such as home offers, payday loans, and gambling content.
  • Sex, Hate, Alcohol, Firearms, and Tobacco related content (SHAFT)

There is other content in this category. The above isn’t exhaustive.

Bandwidth lists messages related to phishing and social engineering as carrying the highest fine, at 2,000 USD per violation. Second, at 1,000 USD per violation, is unlawful content, such as content involving controlled substances or substances not lawful in all 50 states. The lowest fine that Bandwidth mentions, 500 USD, is for SHAFT violations or messages that don’t follow state or federal regulations.

Overall, this looks to be a move that may motivate A2P partners to make more of an effort to filter out certain types of content that are at least generally annoying, if not worse.

What About the Other Carriers?

In the USA there are three nationwide carriers: AT&T, T-Mobile, and Verizon offer service across the nation. I don’t know whether Verizon or AT&T have fines associated with violations, but they do have codes of conduct for A2P providers. If you’d like to read their code of conduct documents, you can find them here.

There’s a lot of overlap in their rules. This may come as no surprise, especially to those familiar with CTIA. CTIA is a wireless trade organization representing carriers in the USA, along with suppliers and manufacturers of wireless products. It has been around since the mid-1980s. Current members of CTIA include AT&T, T-Mobile, Verizon, and US Cellular. You can find a list of members here.

There is a lot more that could be said about how these messaging services work; I may detail it further one day. But for now, the main takeaway is that there’s a popular misconception about the changes T-Mobile is said to be implementing, one that is recognized as a mistaken interpretation once one has a casual familiarity with the terms involved.



Updating Christmas Gifts After They Are Sent

Christmas is around the corner. Among the items of interest this year is the Analogue Pocket. The Pocket is an FPGA-based device that can emulate a lot of older game consoles in hardware, along with having some games of its own. I’m getting one prepared for someone else, but I also need to send the device soon to ensure that it arrives at its destination before Christmas. This creates a conflict between getting more games loaded and shipping it on time. No worries: I can satisfy both if I send the device with something that can download the content after it arrives.

A lot of physical game releases do something similar, when there is a zero-day “patch” for a game or when the disc is only a license for the game and the actual game content is only available online. I’ll be shipping the memory card for the device with a call to action to run the “game installer” on the memory card. After the card is mailed, I can take care of preparing the actual image. The game installer will reach out to my website to find a list of files to download to the memory card, zip files to decompress, and folders to create.

Safety

Though I’m the only one that will be making payloads for my downloader to run, I still imagined some problem scenarios that I wanted to make impossible or more difficult. What if someone were to modify the download list so that it targeted writing files to a system directory or some other location? I don’t want that to happen, so I’ve made my downloader able to write only to the folder in which it lives and to its subfolders. The character sequences needed to reach a parent directory or another drive, if present in the download list, will intentionally cause the application to crash.

Describing an Asset

I started with describing the information that I would need to download an asset. An asset could be a file, a folder, or a zip file. I’ve got an enumeration for flagging these types.

    public enum PayloadType
    {
        File,
        Folder,
        ZipFile,
    }

Each asset of this type (which I will call a “Payload” from here on) can be described with the following structure.

    public class PayloadInformation
    {
        [JsonPropertyName("payloadType")]
        [JsonConverter(typeof(JsonStringEnumConverter))]
        public PayloadType PayloadType { get; set; } = PayloadType.File;

        [JsonPropertyName("fileURL")]
        public string FileURL { get; set; } = "";

        [JsonPropertyName("targetPath")]
        public string TargetPath { get; set; } = "";

    }

For files and zip archives, the FileURL property contains the URL of the source. The TargetPath property contains a relative path to where the payload item should be downloaded or unzipped. A download set could have multiple assets; I broke up the files for the device that I was sending into several zip files. Sorry, but in the interest of not inundating my site with several people trying this out, I’m not exposing the actual URLs for the assets here. The application will be grabbing a collection of these PayloadInformation items.

    public class PayloadInformationList: List<PayloadInformation>
    {
        public PayloadInformationList() { }
    }

The list of assets is placed in a JSON file and made available on a web server.

[
  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Pocket.zip",
    "targetPath": "",
    "versionNumber": "0"
  },


  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_1.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },

  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_2.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },

  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_3.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },

  {
    "payloadType": "ZipFile",
    "fileURL": "https://myserver.com/Assets_4.zip",
    "targetPath": "Assets",
    "versionNumber": "0"
  },
  {
    "payloadType": "Folder",
    "targetPath": "Memories/Save States",
    "versionNumber": "1"
  },
  {
    "payloadType": "Folder",
    "targetPath": "Assets",
    "versionNumber": "1"
  }
]

I might use some form of this again someday, so I’ve placed the initial URL from which the download list is retrieved in the application settings. In the compiled application, the application settings are saved in a JSON file that can be altered with any text editor.

About the Interface

The user interface for this application uses WPF. I grabbed a set of base classes that I often use with WPF applications. I made this using a build of Visual Studio that was released just a month ago and contains significant updates, and I found that my base class no longer works as expected under this new version. That’s something I will have to tackle another day, as I think there has been a change in the relationship between Linq Expressions and Member Expressions. For now, I just used a subset of the functionality that the classes offered. Most of the work done by the application can be found in MainViewModel.cs.

To retrieve the list of assets, I have a method named GetPayloadList() that downloads the JSON containing the list of files and deserializes it. Though I would usually use JSON.Net for serialization needs, here I used System.Text.Json. I also check the paths for characters indicating an attempt to escape the application’s root directory and throw an exception if this occurs.

async Task<List<PayloadInformation>> GetPayloadList()
{
    HttpClient client = new HttpClient();
    var response = await client.GetAsync(DownloadUrl);
    var stringContent = await response.Content.ReadAsStringAsync();
    var payloadList = JsonSerializer.Deserialize<List<PayloadInformation>>(stringContent);
    payloadList.ForEach(p =>
    {
        if (!String.IsNullOrEmpty(p.TargetPath))
        {
            // Reject paths that try to walk up to a parent directory, name a
            // drive, or start from a root; the downloader may only write below
            // its own folder.
            if (p.TargetPath.Contains("..") || p.TargetPath.Contains(":") ||
                p.TargetPath.StartsWith("\\") || p.TargetPath.StartsWith("/")
            )
            {
                throw new Exception("Invalid Target Path");
            }
        }
    });
    return payloadList;
}

Within MainViewModel::DownloadRoutine() (which runs on a different thread), I step through the payload descriptions one at a time and take action for each one. For folder items, the application just creates the folder (and parent folders, if needed). For files, the file is downloaded from the web source to a temporary file on the computer; after it is completely downloaded, it is moved to the final location. This reduces the chance of a partially downloaded file being left on the memory card. The process performed for zip files is a variation of what is done for files: the zip file is downloaded to a temporary location and then decompressed from there to its target folder.

while (_downloadQueue.Count > 0)
{
    Phase = "Downloading...";
    var payload = _downloadQueue.Dequeue();
    DownloadProgress = 0;
    CurrentPayload = payload;
    switch (payload.PayloadType)
    {
        case PayloadType.File:
            {
                Phase="Downloading";
                var response = client.GetAsync(payload.FileURL).Result;
                var content = response.Content.ReadAsByteArrayAsync().Result;
                var tempFilePath = Path.Combine(TempFolder, payload.TargetPath);
                var fileName = Path.GetFileName(payload.FileURL);
                File.WriteAllBytes(tempFilePath, content);
                File.Move(tempFilePath, payload.TargetPath, true);
            }
            break;
        case PayloadType.Folder:
            {
                Phase = "Creating Directory";
                var directoryName = payload.TargetPath.Replace('/', Path.DirectorySeparatorChar);
                var directoryInfo = new DirectoryInfo(directoryName);
                if (!directoryInfo.Exists)
                {
                    directoryInfo.Create();
                }
            }
            break;
        case PayloadType.ZipFile:
            {
                WebClient webClient = new WebClient();
                webClient.DownloadProgressChanged += DownloadProgressChanged;
                webClient.DownloadFileCompleted += WebClient_DownloadFileCompleted;
                var tempFilePath = Path.Combine(TempFolder, Path.GetTempFileName()) + ".zip";
                var fileName = Path.GetFileName(payload.FileURL);
                var directoryName = payload.TargetPath.Replace('/', Path.DirectorySeparatorChar);

                if (String.IsNullOrEmpty(directoryName))
                {
                    directoryName = ".";
                }
                var directoryInfo = new DirectoryInfo(directoryName);
                if (!directoryInfo.Exists)
                {
                    directoryInfo.Create();
                }
                webClient.DownloadFileAsync(new Uri(payload.FileURL), tempFilePath);
                _downloadCompleteWait.WaitOne();
                Phase = "Decompressing";
                System.IO.Compression.ZipFile.ExtractToDirectory(tempFilePath, directoryInfo.FullName,true);

            }
            break;
        default:
            break;
    }
}

Showing Progress

The download process can take a while, so I thought it would be important to make it known that the process is progressing. The primary item of feedback shown is a progress bar; as long as it is growing, it’s clear that data is flowing. I used the WebClient::DownloadProgressChanged event to get updates on how much of a file has been downloaded and to update the progress bar accordingly.

void DownloadProgressChanged(Object sender, DownloadProgressChangedEventArgs e)
{
    // Displays the operation identifier, and the transfer progress.
    System.Diagnostics.Debug.WriteLine("{0}    downloaded {1} of {2} bytes. {3} % complete...", 
                        (string)e.UserState, e.BytesReceived,e.TotalBytesToReceive,e.ProgressPercentage);
    DownloadProgress = e.ProgressPercentage;
}

Handling Errors

There’s a good bit of error handling missing from this code; I made that decision because of time. Ideally, the program would ensure that it has a connection to the server with the source files. This is different from checking whether there is an Internet connection: the computer having an Internet connection doesn’t imply that it has access to the files, nor does having access to the files imply generally having access to the Internet. Having used a lot of restricted networks, I’m of the position that just making sure there is an Internet connection may well not be sufficient.

It is also possible for a download to be disrupted for a variety of reasons. In addition to detecting this, implementing download resumption would minimize the impact of such occurrences.

If I come back to this application again, I might first probe each of the resources with an HTTP HEAD request to see whether they are available. Such requests would also make known the sizes of the files, which could be used to implement a progress bar for the total progress. Slow downloads, though not an error condition, could be interpreted as one; sufficiently informing the user of what’s going on can help prevent them from being thought of as such.

The Code

If you want to grab the code for this and use it for your own purposes, you can find it on GitHub.

https://github.com/j2inet/filedownloader



Iterating Maps in C++

Though I feel like it has become a bit of a niche language, I enjoy coding in C++. It was one of the earliest languages I learned while in grade school. In one of the projects I’m playing with now, I need to iterate through a map. I find the ways in which this has evolved over C++ versions to be interesting and wanted to show them for comparison. I’m using Visual C++ 2022 for my IDE. It supports up to C++ 20, though it defaults to C++ 14.

Changing the C++ Version

To try out the code that I’m showing here, you’ll need to know how to change the C++ version for your compiler. I’ll show how to do that with Visual C++; if you are using a different compiler, you’ll need to check its documentation. In a C++ project, right-click on the project in the Solution Explorer and select “Properties.” From the tree of options on the left, select Configuration Properties -> C/C++ -> Language. On the right side, the option called “C++ Language Standard” will let you change the version. The options there at the time I’m writing this are C++ 14 Standard, C++ 17 Standard, and C++ 20 Standard.

Examples on How to Iterate

A traditional way of iterating involves using an iterator object for the map. If you look in existing C++ source code, you are likely to encounter this method, since it has been available for a long time and is still supported in newer C++ versions. It follows the same pattern you will see for iterating through other Standard Template Library collections. Though it's recognizable to those who use the Standard Template Library in general, it does use iterators, which behave like pointers and carry some of the same risks. Note that I am using the C++11 auto keyword to have the compiler infer the type and make this code more flexible.
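
For the examples that follow, assume a map along these lines; the name shaderMap comes from the loops below, but the element types here are just placeholders.

#include <map>
#include <string>

std::map<std::string, std::string> shaderMap
{
    { "vertexShader", "vs_main" },
    { "pixelShader",  "ps_main" }
};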

for (auto map_iterator = shaderMap.begin(); map_iterator != shaderMap.end(); map_iterator++)
{
     auto key = map_iterator->first;
     auto value = map_iterator->second;
}

A safer method avoids iterator handling altogether. With this next version, we get each element directly and can read its values without dereferencing anything. I take a reference to each item so the pair isn't copied; in optimized builds the references end up being purely notational and don't add an operation. I also think this looks cleaner than the previous example.

for (auto& mapItem : shaderMap)
{
     auto& key = mapItem.first;
     auto& value = mapItem.second;
}

The last version that I’ll show works in C++ 17 and above. This makes use of structured bindings. In the for-loop declaration, we can name the fields that we wish to reference and have variables for accessing them. This is the method that I prefer. It generally looks cleaner.

for (auto const& [key, blob] : shaderMap)
{
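     // key and blob bind directly to each element's first and second members.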

}

Why not just show the “best” version?

Best is a bit subjective, and even then, the best option might not be available to every project. You might have a codebase that is using a version of the C++ language other than the most recent. Even if your environment does support changing the language version, I wouldn't do so arbitrarily. Though the language versions generally maintain backwards compatibility, changing the version is a sweeping change that, for a complex project, could have unknown effects. If there is a productivity reason for making the change and the time and resources are available for fully testing the application, then it might be worth considering. But I discourage giving in to the temptation to use the newest version only because it is newer.

Error Explanation: Microsoft C++ exception: Poco::NotFoundException

While working on a Direct3D 11 program, I noticed something wasn't rendering correctly. I started examining the debug output and came across some exceptions. These exceptions had nothing to do with my rendering error, but I wanted to know what was causing them.

Exception thrown at 0x00007FFE413FCF19 in D3DAppWindow.exe: Microsoft C++ exception: Poco::NotFoundException at memory location 0x0000000B3B5E27C0.
Exception thrown at 0x00007FFE413FCF19 in D3DAppWindow.exe: Microsoft C++ exception: Poco::NotFoundException at memory location 0x0000000B3B5E2800.

I traced this error back to my call to create a D3D11 device. To debug it any further, I'd have to start debugging code outside of what I wrote. The good news is that if you are seeing this exception, it's not your fault. You are likely using an NVIDIA video adapter, and the exception is coming from its driver. The bad news is that there's nothing you can do about it at the moment; it's up to NVIDIA to fix. It may be helpful to report which NVIDIA driver and OS version you use in this NVIDIA thread.



Shared Handles in C++ on Win32

Shared pointers are objects in C++ that manage pointers. As a pointer to an object is passed around, copied, or deleted, a shared pointer keeps track of how many references there are to the object it refers to. When all references are destroyed or go out of scope, the shared pointer deletes the object and frees its memory. This gives smart pointers in C++ something of the feel of a managed memory environment. The burden on the developer of managing memory is pleasantly diminished.

The standard template library offers, among others, the class std::shared_ptr for creating shared pointers. There are other classes, such as std::unique_ptr, with special behaviours (in that case, ensuring that only one reference to the object exists). std::shared_ptr also lets the developer specify a custom deleter for the object; if there is some specific behaviour needed when an object is being deallocated, this feature can support it. These are the signatures for some of the constructors that accept custom deleters.

template< class Y, class Deleter> shared_ptr( Y* ptr, Deleter d );
template< class Deleter> shared_ptr( std::nullptr_t ptr, Deleter d );
template< class Y, class Deleter, class Alloc > shared_ptr( Y* ptr, Deleter d, Alloc alloc );
template< class Deleter, class Alloc> shared_ptr( std::nullptr_t ptr, Deleter d, Alloc alloc );
template< class Y, class Deleter> shared_ptr( std::unique_ptr<Y, Deleter>&& r );
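
As a quick sketch of a custom deleter in use (the file name here is arbitrary), a lambda can close a C FILE* when the last reference goes away:

#include <cstdio>
#include <memory>

int main()
{
    // fclose() is invoked automatically when the last copy of this
    // shared_ptr is destroyed. The null check covers a failed fopen().
    std::shared_ptr<FILE> file(std::fopen("example.txt", "r"),
        [](FILE* f) { if (f) { std::fclose(f); } });
    return 0;
}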

Structures like this are not limited to being used only for pointers. They can be used for other resources too. My interest was in using them to manage handles to Windows objects. Handles are values that identify a system resource, such as a file. A handle's value is not a memory address, but a generally opaque numeric identifier. Think of it as an ID number. When the object that a handle refers to is no longer needed, the handle should be freed with a call to CloseHandle().

I was working with a program written in C/C++ for Windows and writing a function to load the contents of a file. This is the original function.

vector<unsigned char> LoadFileContents(std::wstring sourceFileName)
{
    vector<unsigned char> retVal;
    auto hFile = CreateFile(sourceFileName.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile != INVALID_HANDLE_VALUE)
    {
        DWORD fileSize = GetFileSize(hFile, NULL);

        retVal.resize(fileSize);
        DWORD bytesRead;
        BOOL result = ReadFile(hFile, retVal.data(), fileSize, &bytesRead, NULL);
        CloseHandle(hFile);
    }
    return retVal;
}

Well, that’s not actually the original. In the original, I forgot to make the call to CloseHandle(). Forgetting to do this could lead to resource leaks, or to the file not being available for writing later because a read handle is still open. For my end goal, this won’t be the only file that I use, nor will files be the only type of handle. I wanted to manage these in a safer way. Here, I use std::unique_ptr to manage handles, with a custom deleter that will close a handle.

My custom deleter is implemented as a functor. A functor is a type of object that can be used as a function. Often these are used in callback operations. Functors, unlike typical functions, can also have state. In C++, functors are generally constructed by defining operator() for the object. operator() can take any number of arguments. For my purposes, it only needs one argument: the HANDLE to be closed. A HANDLE can have two values that indicate it isn’t referencing a valid object: the constant INVALID_HANDLE_VALUE (whose literal value is -1), and 0. To ensure CloseHandle() isn’t called on an invalid value, the deleter checks for both and only calls CloseHandle() when neither was passed.

struct HANDLECloser
{
	void operator()(HANDLE handle) const
	{
		if (handle != INVALID_HANDLE_VALUE && handle != 0)
		{
			CloseHandle(handle);
		}
	}
};

Since there will only ever be one object accessing my file handles, I’ll be using std::unique_ptr for my file handles. With the above declaration I could begin using std::unique_ptr objects immediately.

auto myFileHandle = std::unique_ptr<void, HANDLECloser>(hFile);

That’s a lot to type though. In the interest of brevity, let’s make a declaration so that we can invoke that with fewer keystrokes.

using HANDLE_unique_ptr = std::unique_ptr<void, HANDLECloser>;

With that in place, the previous call to initialize a unique pointer could be shortened to the following.

auto myFileHandle = HANDLE_unique_ptr(hFile);

That’s a bit more concise. Let’s add one more thing. Generally, I would be using this with the Win32 CreateFile function. Let’s make a CreateFileHandle() function that takes the same parameters as CreateFile but returns our std::unique_ptr for our file handle.

HANDLE_unique_ptr CreateFileHandle(std::wstring fileName, DWORD dwDesiredAccess, DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSecurityAttributes, DWORD dwCreationDisposition, DWORD dwFlagsAndAttributes, HANDLE hTemplateFile)
{
	HANDLE handle = CreateFile(fileName.c_str(), dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
	if (handle == INVALID_HANDLE_VALUE || handle == nullptr)
	{
		return nullptr;
	}
	return HANDLE_unique_ptr(handle);
}

Using these new pieces, the revised function looks like this.

vector<unsigned char> LoadFileContents(std::wstring sourceFileName)
{
    vector<unsigned char> retVal;
    auto hFile = CreateFileHandle(sourceFileName.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile)
    {
        DWORD fileSize = GetFileSize(hFile.get(), NULL);
        retVal.resize(fileSize);
        DWORD bytesRead;
        BOOL result = ReadFile(hFile.get(), retVal.data(), fileSize, &bytesRead, NULL);
    }
    return retVal;
}
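
Since files won’t be the only type of handle in this program, it’s worth noting that the same deleter works unchanged for other Win32 handles. A hypothetical example with an event object:

// The event handle is closed automatically when the unique_ptr is destroyed.
auto frameReadyEvent = HANDLE_unique_ptr(CreateEvent(nullptr, FALSE, FALSE, nullptr));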

There are some other good bits of code in the project from which I took this that I plan to share in the coming weeks. Some parts are simple but useful; other parts are more complex. Come back in a couple of weeks for the next bit that I have to share.




Recompiling the V8 JavaScript Engine on Windows

I decided to compile the Google V8 JavaScript engine. Why? So that I could include it in another program. Google doesn’t distribute binaries for V8, but they do make the source code available. Compiling it is, in my opinion, a bit complex. This isn’t a criticism. There are a lot of options for how V8 can be built. Rather than making every permutation of these options available for each version of V8, Google leaves it to you to set the options yourself and build for your platform of interest.

But Isn’t There Already Documentation on How to Do This?

There is documentation from Google on compiling it. But there are variations between those instructions and what must actually be done. I found myself searching the Internet for a number of other issues that I encountered and made notes on what I had to do to get around compilation problems. The documentation comes close to what’s needed, but isn’t without error and deviation.

Setting Up Your Environment

Before touching the v8 source code, ensure that you have installed Microsoft Visual Studio. I am using Microsoft Visual Studio 2022 Community Edition. There are some additional components that must be installed. In an attempt to make this setup process as scriptable as possible, I have a batch file that has the Visual Studio Installer add the necessary components. If a component is already installed, no action is taken. Though the Google V8 instructions also offer a command to accomplish the same thing, this is where I encountered my first deviation from their instructions. Their instructions assume the Visual Studio Installer command is named setup.exe (it probably was in a previous version of Visual Studio), whereas my installer is named vs_installer.exe. There were also additional parameters that I had to pass, possibly because I have more than one version of Visual Studio installed (Community Edition 2022, Preview Community Edition 2022, and a 2019 version).

pushd C:\Program Files (x86)\Microsoft Visual Studio\Installer\

vs_installer.exe --productid Microsoft.VisualStudio.Product.Community --ChannelId VisualStudio.17.Release --add Microsoft.VisualStudio.Workload.NativeDesktop  --add Microsoft.VisualStudio.Component.VC.ATLMFC  --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.VC.MFC.ARM64 --add Microsoft.VisualStudio.Component.Windows10SDK.20348 --includeRecommended

popd

You may need to make adjustments if your installer is located in a different path.

While those components are installing, let’s get the code downloaded and put in place. I did the download and unpacking from PowerShell. All of the commands that follow were stored in a PowerShell script. Scripting the process makes it more repeatable and easier to document (since the scripts are also a record of what was done). You do not have to use the same file paths that I do. But if you change them, you will need to make adjustments to the instructions wherever one of these paths is used.

I generally avoid placing folders directly in the root. The one exception is a folder I make called c:\shares. There’s a structure that I conform to when creating this folder on Windows machines. For this structure, Google’s code will be placed in subdirectories of c:\shares\projects\google. In the following script you’ll see that path used.

$depot_tools_source = "https://storage.googleapis.com/chrome-infra/depot_tools.zip"
$depot_tools_download_folder= "C:\shares\projects\google\temp\"
$depot_tools_download_path = $depot_tools_download_folder + "depot_tools.zip"
$depot_tools_path = "c:\shares\projects\google\depot_tools\"
$chromium_checkout_path = "c:\shares\projects\google\chromium"
$v8_checkout_path = "c:\shares\projects\google\"

mkdir $depot_tools_download_folder
mkdir $depot_tools_path
mkdir $chromium_checkout_path
mkdir $v8_checkout_path

pushd "C:\Program Files (x86)\Microsoft Visual Studio\Installer\"
.\vs_installer.exe install --productID Microsoft.VisualStudio.Product.Community --ChannelId VisualStudio.17.Release --add Microsoft.VisualStudio.Workload.NativeDesktop  --add Microsoft.VisualStudio.Component.VC.ATLMFC  --add Microsoft.VisualStudio.Component.VC.Tools.ARM64 --add Microsoft.VisualStudio.Component.VC.MFC.ARM64 --add Microsoft.VisualStudio.Component.Windows10SDK.20348 --includeRecommended
popd

Invoke-WebRequest -Uri $depot_tools_source -OutFile $depot_tools_download_path
Expand-Archive -LiteralPath $depot_tools_download_path -DestinationPath $depot_tools_path

After this script completes, Visual Studio should have the necessary components, and the V8/Chromium development tools will be downloaded and in place.

There are some environment variables on which the build process depends. These variables could be set within batch files, set in the environment of a single command-terminal instance, or set at the system level. I chose to set them at the system level. This was not my first approach; I set them at more local levels initially. But several times when I opened a new command terminal, I forgot to apply them, and I just found it easier to set them globally.

Environment Variable         Value
DEPOT_TOOLS_WIN_TOOLCHAIN    0
vs2022_install               C:\Program Files\Microsoft Visual Studio\2022\Community
PATH                         c:\shares\projects\google\depot_tools\;%PATH%

Environment variables that must be set

From here on, we will be using the command prompt, and not PowerShell. This is because some of the commands that are part of Google’s tools are batch files that only run properly in the command prompt.

From the command terminal, run the command gclient. This will initialize the Google tools. Next, navigate to the folder in which you want the v8 code to be downloaded. For me, this is c:\shares\projects\google. The download process will automatically make a subfolder named v8. Run the following command.

fetch v8 --no-history

This command can take a while to complete. After it completes, you will have a new directory named v8 that contains the source code. Navigate to that directory.

cd v8

The online documentation that I see from Google for v8 is for version 9. I wanted to compile version 12.0.174.

git checkout 12.0.174

Today I am only trying to build v8 for Windows on x64. Eventually I’ll build it for ARM64 also. Run the following commands; they will make the build directories and configurations for the different targets.

python3 .\tools\dev\v8gen.py x64.release
python3 .\tools\dev\v8gen.py x64.debug
python3 .\tools\dev\v8gen.py arm64.release
python3 .\tools\dev\v8gen.py arm64.debug

The build arguments for each environment are in a file named args.gn. Let’s update the configuration for the x64 debug build. To open the build configuration, type the following.

notepad out.gn\x64.debug\args.gn

This will open the configuration in notepad. Replace the contents with the following.

is_debug = true
target_cpu = "x64"
v8_enable_backtrace = true
v8_enable_slow_dchecks = true
v8_optimized_debug = false
v8_monolithic = true
v8_use_external_startup_data = false
is_component_build = false
is_clang = false

Chances are the only differences between the above and the initial version of the file are from the line v8_monolithic onward. Save the file. You are ready to start your build. To kick off the build, use the following command.

ninja -C out.gn\x64.debug v8_monolith

This will also take a while to run, but it will fail. A third-party component fails to build because of a line in a file named fmtable.cpp. You’ll have to alter a function to fix the problem. Open the file at .\v8\third_party\icu\source\i18n\fmtable.cpp. Around line 59, you will find the following code.

static inline UBool objectEquals(const UObject* a, const UObject* b) {
     // LATER: return *a == *b
     return *((const Measure*)a) == ((const Measure*)b);
}

You’ll need to change it so that it contains the following.

static inline UBool objectEquals(const UObject* a, const UObject* b) {
     // LATER: return *a == *b
     return *((const Measure*)a) == *b;
}

Save the file, and run the build command again. While that’s running, go find something else to do. Have a meal, fly a kite, read a book. You’ve got time. When you return, the build should have been successful.

Hello World

Now, let’s make a hello world program. Google already has a v8 hello world example that we can use to verify that our build was successful. We will use it for now, as I’ve not discussed anything about the v8 object library yet. Open Microsoft Visual Studio and create a new C++ console application. Replace the code in the .cpp file that it provides with Google’s code.


// Copyright 2015 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "libplatform/libplatform.h"
#include "v8-context.h"
#include "v8-initialization.h"
#include "v8-isolate.h"
#include "v8-local-handle.h"
#include "v8-primitive.h"
#include "v8-script.h"

int main(int argc, char* argv[]) {
    // Initialize V8.
    v8::V8::InitializeICUDefaultLocation(argv[0]);
    v8::V8::InitializeExternalStartupData(argv[0]);
    std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    // Create a new Isolate and make it the current one.
    v8::Isolate::CreateParams create_params;
    create_params.array_buffer_allocator =
        v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    v8::Isolate* isolate = v8::Isolate::New(create_params);
    {
        v8::Isolate::Scope isolate_scope(isolate);

        // Create a stack-allocated handle scope.
        v8::HandleScope handle_scope(isolate);

        // Create a new context.
        v8::Local<v8::Context> context = v8::Context::New(isolate);

        // Enter the context for compiling and running the hello world script.
        v8::Context::Scope context_scope(context);

        {
            // Create a string containing the JavaScript source code.
            v8::Local<v8::String> source =
                v8::String::NewFromUtf8Literal(isolate, "'Hello' + ', World!'");

            // Compile the source code.
            v8::Local<v8::Script> script =
                v8::Script::Compile(context, source).ToLocalChecked();

            // Run the script to get the result.
            v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();

            // Convert the result to an UTF8 string and print it.
            v8::String::Utf8Value utf8(isolate, result);
            printf("%s\n", *utf8);
        }

        {
            // Use the JavaScript API to generate a WebAssembly module.
            //
            // |bytes| contains the binary format for the following module:
            //
            //     (func (export "add") (param i32 i32) (result i32)
            //       get_local 0
            //       get_local 1
            //       i32.add)
            //
            const char csource[] = R"(
        let bytes = new Uint8Array([
          0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01,
          0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x07,
          0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, 0x0a, 0x09, 0x01,
          0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b
        ]);
        let module = new WebAssembly.Module(bytes);
        let instance = new WebAssembly.Instance(module);
        instance.exports.add(3, 4);
      )";

            // Create a string containing the JavaScript source code.
            v8::Local<v8::String> source =
                v8::String::NewFromUtf8Literal(isolate, csource);

            // Compile the source code.
            v8::Local<v8::Script> script =
                v8::Script::Compile(context, source).ToLocalChecked();

            // Run the script to get the result.
            v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();

            // Convert the result to a uint32 and print it.
            uint32_t number = result->Uint32Value(context).ToChecked();
            printf("3 + 4 = %u\n", number);
        }
    }

    // Dispose the isolate and tear down V8.
    isolate->Dispose();
    v8::V8::Dispose();
    v8::V8::DisposePlatform();
    delete create_params.array_buffer_allocator;
    return 0;
}

If you try to build this now, it will fail. You need to do some configuration. Here is a quick list of the configuration changes. If you don’t understand what to do with these, that’s fine. I will walk you through applying them.

VC++ Directories : 
	Include : v8\include
	Library Directories<Debug>: v8\out.gn\x64.debug\obj
	Library Directories<Release>: v8\out.gn\x64.release\obj

C/C++
	Code Generation
		Runtime Library <Debug>: /MTd
		Runtime Library <Release>: /MT
	Preprocessor Definitions
		V8_ENABLE_SANDBOX;V8_COMPRESS_POINTERS;_ITERATOR_DEBUG_LEVEL=0;
		
Linker
	Input
		Additional Dependencies: v8_monolith.lib;dbghelp.lib;Winmm.lib;

Right-click on the project file and select “Properties.” From the pane on the left, select VC++ Directories. In the drop-down at the top, select All Configurations. On the right there is a field named Include Directories. Select it, and add the full path to your v8\include directory. For me, this is c:\shares\projects\google\v8\include. If you built in a different path, it will be different for you. After adding the value, select Apply. You will generally want to press Apply after each field that you change.

Change the Configuration drop-down at the top to Debug. In the Library Directories entry, add the full path to your v8\out.gn\x64.debug\obj folder and click Apply. Change the Configuration drop-down to Release and, in Library Directories, add the full path to your v8\out.gn\x64.release\obj folder.

From the pane on the left, expand C/C++ and select Code Generation. On the right, set the Debug value for Runtime Library to /MTd and set the Release value for the field to /MT.

Change the Configuration drop-down back to All Configurations and add the following values to Preprocessor Definitions.

V8_ENABLE_SANDBOX;V8_COMPRESS_POINTERS;_ITERATOR_DEBUG_LEVEL=0;

Keep the Configuration drop-down on All Configurations. Expand Linker and select Input. For Additional Dependencies, enter v8_monolith.lib;dbghelp.lib;Winmm.lib;

With that entered, press OK. You should now be able to run the program. It passes some JavaScript to the engine to execute and prints out the results.
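
If the build and configuration went well, the console output should look like the following (the first line from the string script, the second from the WebAssembly module).

Hello, World!
3 + 4 = 7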

What’s Next

My next set of objectives is to demonstrate how to project a C++ object into JavaScript. I also want to start thinning out the size of these files. On a machine that is only using the v8 binaries, the entire set of build tools is not needed. At the end of the above process, the v8 folder holds 12 gigs of files. If you copy out only the build output and headers needed for other projects, that is reduced to 3 gigs. Further reductions could come from changing some of the compilation options.

