This is a guest post to celebrate International Women in Engineering Day on June 23rd. Written by Alexandra Martinez Rodriguez, Senior UX Researcher, MathWorks and Ruth Faherty, Advanced Support...

This May, the Women's Affinity Group at MathWorks - Northern Europe chapter hosted a breakfast mixer that served as a platform for empowerment, networking, and insightful discussions within the tech community. The affinity group aims to promote a diverse and inclusive environment to nurture the growth and development of Women at MathWorks, enable a sense of belonging, and recognize the importance of intersectionality, global communities, allyship, and transparency. Specifically, we help build a community and foster networking and relationship-building across our membership.

Attendees at this breakfast mixer included members and allies from the Women at MathWorks Affinity Group based in Cambridge, as well as staff from Roku and Featurespace. The event started with a networking breakfast, followed by a panel discussion on how women can support other women in their organisations and networks, and concluded with an opportunity for further networking.

*Attendees networking and drinking coffee*

The day kicked off with an atmosphere brimming with enthusiasm. As attendees arrived, they were greeted with a spread of breakfast options and refreshments, setting a casual and inviting tone for the event.

*Attendees networking and eating breakfast*

At the beginning of the event, participants started to congregate around the various coffee machines. This was a wonderful opportunity to initiate conversations with members from different companies. In between exchanges of laughter and discussions, we delved into topics such as the representation of women within our respective companies and teams, future opportunities, and how events like this foster a supportive environment. We even shared tips on how to brew a better cup of coffee using the machines at hand! These interactions helped break the ice and paved the way to getting to know each other.

After everyone filled their plates, poured their coffee, and had a chat, we had the opportunity to listen to a panel discussion about how women and allies can support other women within their organizations and networks. We touched on mentorship, career development, and overcoming challenges. Moderated by Janet Macmillan (MathWorks), the panel featured Deborah Ferreira (MathWorks), Denise Cannadine (Roku), and Melina Jonkisz (Featurespace), offering perspectives from their varied roles and experiences across the three companies.

*Panelists left to right: Deborah Ferreira (MathWorks), Melina Jonkisz (Featurespace), Denise Cannadine (Roku), Janet Macmillan (MathWorks)*

A key takeaway from the discussion was the agreement on the importance of mentorship in shaping careers. Whether through formal schemes, like the pilot program at MathWorks, or more informal arrangements, the panelists all reflected that having a mentor had been an essential element in their professional growth. The conversation also touched upon the significance of self-belief and putting yourself forward for new opportunities. Panelists shared insights into how women can support each other in professional settings, emphasizing the impact of encouragement and representation to create a supportive community.

*Attendees listening to the panel discussion*

The event concluded with more networking, allowing participants to reflect on the insights shared and discuss their experiences. This gathering was not just an opportunity to form new connections but also a moment to learn from one another. We hope this is the first of many similar events!

Zhaoxu Liu / slandarer is a winner of the 2023 MATLAB Mini Hack contest and an active contributor to the File Exchange and Discussions. Zhaoxu’s creative Mini Hack contest entries amazed and...

I am an undergraduate student at the Ocean University of China, majoring in Information and Computational Science. I am about to pursue a postgraduate degree in the UK.

I particularly enjoy constructing mathematical models and am keen on practical applications. Therefore, I plan to pursue a master’s degree in Applied Mathematics.

I first encountered MATLAB during a series of practical courses in my freshman year at university, such as Mathematical Modeling Practice. I particularly remember when the teacher of the basic mathematical experiment course mentioned that if we could use MATLAB to design a small game, present it in class, and explain the principles, we would receive full marks for our regular grades. So, I worked hard to look up various ways to do this. A week later, I presented a Go game in class. Although some of the Go's forbidden move rules were not integrated into the code, it marked the beginning of my journey with MATLAB. After this code presentation, my classmates remembered me and started inviting me to participate in projects and competitions involving programming. During this process, I tried writing code in different fields, such as participating in a key project in Shandong Province for the classification of flow cytometer data and simulating the heat conduction of chips inside a reflow oven in the National College Student Mathematical Modeling Competition. My coding skills gradually improved through these challenges. Therefore, I would tend to say that various experiences in mathematical modeling had a significant impact on my MATLAB coding.

As for the challenges I encountered, I believe they lie in the need to learn new things in modeling, understand formulas, and translate them into MATLAB code.

Initially, I only documented some convenient codes for myself on CSDN. As my followers gradually grew to tens of thousands, I decided to update the code I wrote on various platforms.

During the sharing process, I also learned from code written and shared by others. Some articles mentioned the amazing works of various experts in the previous two Mini Hack contests, which deeply impressed me. Besides sharing some self-written utility functions, I also write code that produces interesting dynamic effects. This coincides with the theme of the new Flipbook Mini Hack contest, "interesting dynamic images." Therefore, I was determined to participate in this contest.

Among my entries, the dynamic image rose bouquet came about when Josh suggested I adapt one of my File Exchange submissions and enter it in the competition. It took a lot of effort to compress it to the required character limit, but it received a lot of comments and recognition. Additionally, particle heart 2 has been remixed a whopping 9 times. I am proud of these two entries.

As for the most interesting entries in the competition, I have to mention the large number of realistic and stunning dynamic images provided by @Tim. The dynamic image Moonrun left a particularly deep impression on me. After the competition, MathWorks engineer @Adam Danz provided a very detailed explanation in his post Creating natural textures with power-law noise: clouds, terrains, and more, which was extremely beneficial.

Regarding the 2024 Mini Hack contest: if we continue to create animated images, I hope participants can choose the number of frames within a given limit. This would enable creators to express a more complete narrative.

In addition, I hope that apart from the regular submission channel for works, several interesting themes can be introduced, allowing participants to submit their work separately for each theme with individual rewards. Furthermore, these themes don't necessarily have to be nouns (such as holidays or skyscrapers) but could be more abstract concepts, such as peace, stark contrast, elegance, warmth, or coldness.

After sharing my code, many readers use it with different versions of MATLAB, providing valuable feedback to help improve my code. Additionally, in the comments section, people often mention more efficient functions or coding practices that I may not have encountered, which can further streamline and enhance my code. As I mentioned in a previous response, for a long time, I only shared code on CSDN that I found convenient for myself. After deciding to publish my code on other Chinese platforms, I spent a considerable amount of time sharing content exclusively in Chinese. During this period, MATLAB enthusiasts and users from around the world asked me about code details and usage methods. I realized that sharing this on File Exchange could help even more people. This is one of the main reasons I decided to begin translating and sharing parts of my code comments into English on the File Exchange. Of course, another crucial reason is that uploading to File Exchange allows for better version control of the code.

I am someone who enjoys creating tools for myself. For color selection when plotting, I have developed a tool with 200 colormaps and 2000 palettes to assist in choosing colors. Additionally, because of the extensive variety of color palettes, I have designed a color selection interface using App Designer, which helps filter for similar color schemes. Before creating plots, I study the details of plots and color schemes in various top-tier journals and attempt to apply those techniques.

As for advice for MATLAB beginners, the most useful suggestion would be to spend time exploring the MathWorks official website. The documentation there is incredibly detailed, outlining the necessary toolboxes for each field, the functions to learn, the purpose of each function's properties, and providing ample examples, sometimes even accompanied by instructional videos. Another valuable piece of advice, which extends beyond MATLAB, is the need for continuous coding to enrich and enhance one's skills. When facing complex projects, it's beneficial to break them down into modules and implement their functionalities step by step.

Well, my hobbies are indeed quite diverse. Apart from table tennis, most of them lean towards skill-based activities, such as Chinese painting, crystallography, and origami. The inspiration for my fun code for a rose ball actually came from a rose ball I once folded in origami, while a piece of code for a crystal heart stemmed from my experiments in crystallography.

My erstwhile colleague Steve Eddins recently retired from MathWorks after a long and illustrious career. And once he was finally free of the office, what did he long to do? Maybe a few rounds of golf...

I know all this because Steve has already set up a personal MATLAB blog: Matrix Values.

I was especially interested in his second post, Initialize a MATLAB Toolbox. In it, he describes the process of first adapting, then automating our toolbox design guidelines. You may remember a post about those guidelines that we did last year: What Is a Toolbox? New Guidelines for Authors.

Take a look at Steve's post. He does a nice job summarizing the toolbox design document and then talking you through the parts that he finds valuable as well as the parts he doesn't need to worry about. And since he is a self-avowed automation junkie, he wasn't going to stop until he wrote some tooling to help him out. Steve puts it like this:

As I learned during my MathWorks career, a set of recommended practices is much more likely to take hold if it can be automated.

Amen to that. And I have good news! You too can take advantage of his helpful automation, because he's sharing it with the world in general and you in particular. You can find the tool he wrote, inittbx, on the File Exchange and GitHub.

Steve uses many of our guidelines, and also throws in a few wrinkles of his own. I like the checklist file, CHECKLIST.md. He says it "reminds me of all the steps needed to complete the job." It's a lighter weight approach than opening a bunch of issues for yourself.

So how does it work? Well, I'm about to start work on my magnum opus, a Sudoku solver called the Sudokunator. All I have to do is ...

`> inittbx("Sudokunator")`

and I'm on my way with beautiful standards-compliant folders and files, all ready to go. Now all I have to do is, you know, write the darn code.

So thanks, Steve, for making yourself useful even as you ride off into the sunset!

Recently I've seen a number of fractal images created with Newton's method. For instance, Daniel Pereira had two spectacular animations in the recent Flipbook Mini-Hack contest. Rotating Newton's...

Here's another example of the kind of picture I'm talking about.

How is such an image made? What does it represent? It's one of those things where complexity is hiding behind something that, on its face, seems straightforward.

To motivate this, consider this question: How do you find the roots of this polynomial?

$ x^3 - 2x^2 - 11x + 12 $

That is, for what values of x is the equation $ x^3 - 2x^2 - 11x + 12 = 0 $ true?

There are a number of ways to answer this question. If you’re good at factoring polynomials, you might recognize that this is (x-4)(x-1)(x+3). Another answer is to use a little MATLAB magic. MATLAB represents polynomials as vectors containing the coefficients ordered by descending powers. Here is that same polynomial in vector form.

```matlab
>> p = [1 -2 -11 12]
```

Once in the right form, there’s a command ROOTS that will answer our question directly.

```matlab
>> roots(p)

ans =

   -3.0000
    4.0000
    1.0000
```

But let's say that your polynomial won't factor so easily, and you don't have a computer handy. Let’s imagine you're Isaac Newton and you want to find the roots of this polynomial. Now what?

As it happens, Newton is credited with a nifty iterative method for finding the roots in a situation like this. You start with a guess, and you use the slope of the curve at that point to make a better guess. Keep repeating this process until you get as close as you want to zero.

Let's start with a guess at -1.9.

```
x=-1.9000, f(x)=18.8210
x=-4.4331, f(x)=-65.6622
x=-3.4335, f(x)=-14.2877
x=-3.0585, f(x)=-1.6770
x=-3.0013, f(x)=-0.0364
x=-3.0000, f(x)=-0.0000
```

If we start at -1.9, we end up finding the root at -3 in around six iterations. But wait! There are three roots to this polynomial. It turns out that different starting locations will converge to different roots. So for every starting guess, there is a corresponding root. This is where things get interesting.
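A trace like the one above can be produced with just a few lines of MATLAB. Here is a minimal sketch of the iteration; the stopping tolerance and iteration cap are assumptions, not taken from the original post:

```matlab
p  = [1 -2 -11 12];
f  = @(x) polyval(p,x);              % the polynomial
df = @(x) polyval(polyder(p),x);     % its derivative

x = -1.9;                            % starting guess
for k = 1:50
    fprintf("x=%.4f, f(x)=%.4f\n", x, f(x))
    if abs(f(x)) < 1e-6              % assumed convergence tolerance
        break
    end
    x = x - f(x)/df(x);              % Newton step: follow the tangent line
end
```

Each pass replaces the guess with the point where the tangent line at the current guess crosses the x-axis, which is exactly the slope-based update described above.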

This iterative process maps an initial guess to a root. You can think of the resulting map as being divided into basins. Different basins drain into different roots. We can color these basins according to the root to which each starting point eventually converges. Additionally, we'll keep track of how many iterations it takes to converge (that is, to get within a given tolerance of the root). The curve in the top graph is colored according to its corresponding root. Each colored region represents a different basin.

Curiously, these basins aren't always well defined. As we zoom into the boundaries between two basins, we keep finding smaller and smaller regions that converge to different roots. This back-and-forth convergence continues no matter how much you zoom in. This phenomenon is sometimes called a fractal basin boundary.

Now that we've demonstrated this for the real number line, you may wonder: can we generalize to the complex plane? Yes we can, because MATLAB is awesome! Since complex numbers are a native data type, you can use exactly the same code as in the one dimensional case.

So let's examine a more complicated polynomial. This one has complex roots. We'll use the process shown above to find the root basins on the complex plane.

What are the roots of this polynomial?

$ 8x^7 + x^6 + 3x^5 + 7x^4 + 2x^3 + 4x^2 + 5x + 6 $

Since it is a seventh order polynomial, there are seven roots to this equation. Every point on the complex plane maps to one of them. So every single pixel in this plot is the result of Newton's method. But the borders between these regions are demented fractals.

Look how simple the code is! There's only one line right in the middle that's doing all the heavy lifting.

```matlab
p = [8 1 3 7 2 4 5 6];
% Make anonymous functions for the polynomial and its derivative.
f = @(x) polyval(p,x);
df = @(x) polyval(polyder(p),x);
span = linspace(-1.5,1.5,300);
[x,y] = meshgrid(span,span);
z = x + 1i*y;
% Apply Newton's method
for k = 1:50
    z = z - f(z)./df(z);
end
% Use ANGLE as a proxy to display the root
imagesc(span,span,angle(z))
axis square
% Show the roots
hold on
plot(roots(p), ...
    LineStyle="none", ...
    Marker="pentagram", ...
    MarkerFaceColor="red", ...
    MarkerSize=18)
hold off
```

And how many iterations did it take at each point to converge to that root? That's the picture we started with at the top of the post.

Now that you know the basics, it’s easy to make some eye-popping animations. Here's one of my own additions to the recent Flipbook contest.

If you want to learn more, take a look at Cleve's post on this topic: Fractal Global Behavior of Newton’s Method.

The MATLAB AI Chat Playground is ready for you to experiment with Generative AI, answer questions, and write initial draft MATLAB® code.

All of us on the community team share a guiding principle. We wake up every day thinking about how we can help our users accelerate their pace of learning and building. ChatGPT, Generative AI, and large-language model (LLM) technology have been sweeping forces, and we see this amazing technology as a tool that, if applied correctly, can help users find answers, get initial drafts of code, and help break down complex engineering problems.

We spent this year listening to you, experimenting with Generative AI ourselves, and building a space for our users to experiment. We have been working with our top community users and refining the experience, and I am thrilled to announce that we are now opening the MATLAB AI Chat Playground to all users.

To get access right now, go to the MATLAB Central community and select AI Chat Playground.

The AI Chat Playground is optimized to assist with MATLAB-related input, including toolboxes for deep learning, statistics and machine learning, optimization, control systems, and signal processing. Currently, the system has limited knowledge of Simulink and other MathWorks products.

Open the AI Chat Playground in a desktop browser.

The playground features a chat panel next to a lightweight MATLAB code editor. Use the chat panel to enter natural language prompts to return explanations and code. You can keep chatting with the AI to refine the results or make changes to the output.

Try one of the onboarding prompts if you don’t know where to start.

The AI replies with an explanation, some code, and follow-up prompts to keep the conversation going. At any point, you can enter your own input and explore an endless number of directions. Every time I use the playground, I learn about features of MATLAB, ways to interface with functions, or new concepts. I am always picking up new things.

If you want to try the MATLAB code, select the insert icon next to the code and then click the run button. This will execute the code and show the results. If you want to copy the code and bring it into the MATLAB editor or Visual Studio Code, select the copy button.

Please rate the output of the AI with the thumbs up or thumbs down buttons. This is an important signal for us to figure out how to improve the output and refine the experience over time.

Here are some use cases for the playground that others have found helpful.

A good place to start is to ask basic technical computing questions.

- What's an eigenvalue?
- How do I customize a line plot?

If you don’t have a specific question, you can also use the AI Chat Playground to generate ideas on how to do something.

- Give me some ideas on how to visualize large amounts of rainfall data.
- My data is noisy. What can I do to fix it?

A powerful way to use the playground is to get MATLAB code. Try entering some high-level instructions.

- Create some data, construct a grid of query points, interpolate the grid, and plot the results.
- Roll two six-sided dice 1000 times and plot the sum of each roll.

I look forward to seeing what you learn and build with the MATLAB AI Chat Playground. Please share your experiences here and in the Generative AI discussion area of the MATLAB Central community. It’s important that we inspire each other and collectively learn how to harness this powerful technology. *We are all in this together.*


Today we're launching a new contest here on MATLAB Central: the MATLAB Flipbook Mini Hack. Why flipbook? It's an animation contest in which each animation gets exactly 48 frames, so it's like flipping through a 48-page pad of paper with slightly different drawings on each page. Check it out: entries are already pouring in!

In this post, I want to demonstrate some coding techniques that can make your animations easier. Suppose, for example, you wanted to animate a dot moving in a circle. Here the variable f is the frame number. It will take on the values from 1 to 48.

```matlab
f = 1:48;
```

We want to use f to help us make a dot move in a circle. As f increments up to 48, we'll make the variable theta go from 0 to 2 pi. A perfect job for INTERP1! The INTERP1 function is a convenient way to do a linear map from one domain to another.

```matlab
theta = interp1([0 48],[0 2*pi],f);
plot(f,theta,'.-',MarkerSize=12)
```

Let's pretty up this plot.

```matlab
xlim([1 48])
ylim([0 2*pi])
set(gca,XTick=0:8:48)
set(gca,YTick=pi*(0:0.5:2),YTickLabel={"0","\pi/2","\pi","3\pi/2","2\pi"})
xlabel("Frame Number")
ylabel("Theta")
grid on
```

This shows you how theta varies across all frames. It's all pretty straightforward, but I just love how INTERP1 washes away the hassle of working out the linear mapping.

Now it's time to build one frame of our animation. So instead of looking at all values of f, we'll just consider one. Appropriately, we'll start with frame 1.

```matlab
f = 1;
```

Now we use our interpolation trick to map frame number into theta.

```matlab
theta = interp1([0 48],[0 2*pi],f);
```

With theta defined, it's time to plot our moving dot. A little trig will put the dot in the right place.

```matlab
plot(cos(theta),sin(theta),".", ...
    MarkerSize=24)
```

Not much to look at so far! Let's make a circular race track that doesn't move so we can follow the action more clearly.

```matlab
t = linspace(0,2*pi);
line(cos(t),sin(t))
```

And we'll square up the axis and reset the limits so the dot isn't bumping into the edges.

```matlab
axis square
axis([-2 2 -2 2])
```

Now we have the code that we put into the contest entry page. This code will create any of the frames 1 through 48 on demand. When you push the "Create Animation" button, the contest machinery does the rest of the magic. It generates all 48 frames in succession and stitches them into a single GIF image.
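If you'd like to reproduce that stitching step on your own machine, one way is to render each frame with getframe and append it to a GIF with imwrite. This is a sketch, not the actual contest machinery, and the file name and frame delay are arbitrary choices:

```matlab
filename = "dot.gif";                  % hypothetical output file
t = linspace(0,2*pi);                  % the stationary race track
for f = 1:48
    theta = interp1([0 48],[0 2*pi],f);
    plot(cos(theta),sin(theta),".",MarkerSize=24)
    line(cos(t),sin(t))
    axis square
    axis([-2 2 -2 2])
    % Capture the figure and append it to the GIF
    frame = getframe(gcf);
    [img,map] = rgb2ind(frame2im(frame),256);
    if f == 1
        imwrite(img,map,filename,"gif",LoopCount=Inf,DelayTime=1/24)
    else
        imwrite(img,map,filename,"gif",WriteMode="append",DelayTime=1/24)
    end
end
```

With 48 frames at 24 frames per second, you get the two-second loop described below.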

Here is the result.

There you have it: a two-second 48-frame movie masterpiece! I call it "Cutting a Hole in the Wall with a Blue Laser". Or "Blue Polar Bear Circles the North Pole". What would you call it?

That was a simple way to get started, but I feel we can do better. Let's do something more interesting with this circular motion.

First set up a landscape.

```matlab
% Set up a landscape
peaks
shading flat
material dull
axis vis3d off
```

Nice! Now we're going to create a light and move it around above this landscape. We can use our interpolation trick.

```matlab
% Just showing frame 1 here
f = 1;

% Map onto theta
theta = interp1([0 48],[0 2*pi],f);

% Set up the light
ht = 4*sin(theta) + 4;
pos = [2*cos(theta) 2*sin(theta) ht];
light(Position=pos)

% The actual light is invisible, so we'll mark its position with a yellow dot
line(pos(1),pos(2),pos(3), ...
    Marker=".",MarkerSize=24,Color="y")
set(gcf,Color=0.8*[1 1 1])
```

Now as f changes, the light orbits around the landscape in three dimensions. Here's the final animation. Pretty cool, eh? It's certainly better than our first animation.

I have found this contest to be a lot of fun, and very addictive. Maybe you will too. I encourage you to give it a try. Now that you know the INTERP1 trick, you have the secret to making your images dance.

I like to play Cody because I always learn something. After I've solved a problem, I spend a little time with the Solution Map looking at other solutions. It doesn't take long to find something...

For instance, have you ever heard of the function ISSORTED? It's a simple function, but it was new to me. Let me tell you how I first learned about it. Along the way, I'll tell you about a new trick for finding good Cody solutions.

Let's consider Cody Problem 10, which is about monotonically increasing vectors. Here's the problem statement: Return true if each element in the input vector is larger than its predecessor.

At the moment I write this, Problem 10 has 67,319 solutions. Quick! Which one is the best? It's a daunting request. But the best solution out of a set that big is probably pretty good. Certainly worth a look. But which one is it?

We can start with a simple distinction: of all these solutions, only 20,813 are correct. So that's progress! The best solution is likely to be in this subset. Which of these correct answers is the best? Cody uses a metric, called "size," to rank the solutions. A solution's size is the number of nodes in the parse tree for a given solution, and lower is considered better. It's a useful metric, but far from perfect. We all know that the shortest solutions aren't always the best solutions. Indeed, because of some weird hacks that people have worked out (*cough cough*, regexp), the shortest solution is often a terrible solution in real life.

If size isn't a great metric, can we come up with a better one? In general, it's difficult, maybe impossible, to come up with an automated system that can figure out which solution is the best. What criteria are you using? Size? Speed? Efficiency? Readability? And how do you weight each of these? Would you sacrifice readability for the sake of speed, or vice versa? Everybody has a different notion of goodness.

Ultimately, what most of us want to know is this: *Which solution do most people prefer?* It's a utilitarian answer that relies on the wisdom of the crowd. We're acknowledging that judging code is subjective, and we're asking our friends and neighbors to apply their subjectivity, biases and all. This turns out to work pretty well.

Now that we've decided to make a subjective judgment system, we have to work through some practical issues. How do you get people to give fair and distinct scores to thousands of solutions? Here is the key insight: whereas it's difficult to grade individual entries consistently, it is generally straightforward to compare two entries and say which one is better. And once we've built up enough binary comparisons, we can use statistics to work out scores for all the solutions. Here we rely on techniques (the Bradley-Terry algorithm, in this case) developed for ranking players in sports and games. Think about how a collection of chess or tennis matches can be aggregated into one global ranking.
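To make the aggregation step concrete, here is a minimal sketch of Bradley-Terry scoring using the classic iterative (minorization-maximization) update. The win matrix here is invented for illustration, and Cody's actual implementation is surely more elaborate:

```matlab
% W(i,j) = number of times solution i was preferred over solution j
% (made-up data for three solutions)
W = [0 3 1;
     1 0 4;
     2 0 0];
n = size(W,1);
s = ones(n,1);                       % strength parameter per solution
for iter = 1:100
    for i = 1:n
        wins = sum(W(i,:));          % total comparisons i won
        denom = 0;
        for j = [1:i-1, i+1:n]
            % each pairing contributes (games played)/(combined strength)
            denom = denom + (W(i,j) + W(j,i)) / (s(i) + s(j));
        end
        s(i) = wins / denom;
    end
    s = s / sum(s);                  % normalize to keep the scale fixed
end
score = 100 * s / max(s)             % rescale so the top score is 100
```

The model says the probability that solution i beats solution j is s(i)/(s(i)+s(j)); the loop finds the strengths that best explain the observed head-to-head results, just as one would rank chess players from match outcomes.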

A few months ago, Cody added a crowd-scoring feature based on this head-to-head comparison approach. Now we can tell you not only which code is the smallest, but also which code is preferred by the people who are playing.

Let's look at a set of solutions to Problem 10 that have all been ranked. Here is a histogram by size. The median size is 37.

Things get more interesting when we plot crowd score vs. size. Crowd score is what we're calling the subjective ranking applied by players like you. This number is the output of the Bradley-Terry algorithm, and it's been normalized here so that the best crowd score is 100.

There is an obvious skew to the distribution: preferred solutions tend to be shorter.

We should note here that the great majority of solutions have not been given a crowd score yet, so we can't expect the results to be comprehensive and consistent. Similar entries may receive different ratings. Nevertheless, the plot is instructive. It extracts good entries from the vast amount of background noise.

The red arrow indicates the current crowd favorite:

In English we might say: in all cases, the difference between two successive elements should be positive. Compact and easy to read! This use of DIFF is a nice example of vectorized code. You don't need a loop to go through all the elements of x. But you can if you want to. Here is a longer, less preferred solution.

It still gets the correct answer, but the code is longer (and arguably, harder to read) because it explicitly loops through the vector x.

Here is a plot that distinguishes between those solutions that use the vectorized DIFF approach and those that use the looping LENGTH approach. Vectorization leads to shorter, and generally more preferred code.
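In case the solution screenshots don't come through, the two styles being compared look roughly like this. The vectorized one-liner follows directly from the description above; the loop version is a representative reconstruction, not the literal Cody entry:

```matlab
% Vectorized: every successive difference must be positive
function tf = mono_vectorized(x)
    tf = all(diff(x) > 0);
end

% Loop-based: same logic, checked element by element
function tf = mono_loop(x)
    tf = true;
    for k = 2:length(x)
        if x(k) <= x(k-1)
            tf = false;
            return
        end
    end
end
```

Both return true for [1 4 9] and false for [3 3 5], but the DIFF version says it in one line.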

There are many variants on the DIFF theme. For instance, this one is slightly more complicated than the leader.

Here's one that is still vectorized, but it does the DIFF calculation explicitly.

There are some other intriguing entries from the list of preferred solutions. I like this one. Here the function UNIQUE is enforcing the problem criteria: if the vector is already in the order that would be returned by UNIQUE, then you're good.

And finally, here's the one from farther down the list that took me by surprise.

I didn't even know about the ISSORTED function, but it's been around for a long time, and it offers the shortest, most direct solution to the problem.
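One detail worth noting if you try it yourself: with no options, ISSORTED accepts repeated elements (nondecreasing order), while Problem 10 demands that each element be strictly larger than its predecessor. The direction argument handles that distinction; whether the winning entry needed it is worth checking against the entry itself:

```matlab
issorted([1 2 2 3])                  % nondecreasing is enough here
issorted([1 2 2 3],"strictascend")   % repeats are rejected
issorted([1 2 3 5],"strictascend")   % strictly increasing
```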

Which one of these is your favorite, and why?

I love solution tourism in Cody. Where else can you be exposed to so many novel functions and solution methods? Even so, the full list of Cody solutions can feel like an impenetrable jungle. Where do you start? That's why we introduced solution voting: to find the treasures hidden in the thicket.

And if you like this feature, remember that it's only as good as the votes going into it. Please spend a little time voting on answers yourself. It only takes a little effort, and everybody wins!

Today's guest article is by Wesley Hamilton, a STEM Outreach engineer here at MathWorks. See his earlier piece on solving Sudoku puzzles.

In this blog post we'll explore the fascinating world of continued fractions, like $ \sqrt{2} = 1 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \cdots}}} $. While these objects have been studied for several centuries, their presence in modern math classes is relatively rare (unless you're studying number theory, probably). One of the historical uses of these objects has been computing rational approximations of irrational numbers, such as the 5th century approximations for π of $ \frac{22}{7} $ and $ \frac{355}{113} $ by Chinese astronomer Zu Chongzhi. Here, we'll focus on something even less prevalent: their computational aspects. The goal is to start exploring statistical properties of continued fractions, generating conjectures, and then pointing to several topics that can serve as computational playgrounds to get one's hands dirty and explore even more.

Before defining what a continued fraction even is, we'll start by defining a few "important" numbers to use as examples throughout. Next we'll define continued fractions and compute some by hand, before introducing the computational algorithm to compute these things. Afterwards we'll explore their statistical properties, before discussing some conjectures and neat facts, and round out the discussion by including links to other interesting computational topics to explore.

Table of Contents

So that we have a rich class of examples, let's set up a few "important" numbers to use throughout the rest of this blog. Some of the numbers are important in some area(s) of math, though the whole collection should give us a nice, diverse set of examples to explore with. Also some of the names used are likely not standard, but describe how the numbers are constructed fairly well.

First, let's set up some parameters that'll determine the precision we'll be using throughout, along with a random number seed to ensure reproducibility no matter which machine (and when) these examples are run.

d = 10000; %set the computational precision "very" high

rng(1); %set the seed for the random number generator

And with that, let's set up our collection of numbers. For those interested in learning about each number, check out the code block at the end of this post. If you're content with knowing these are just being used as examples, feel free to skip seeing the definitions.

[num1, num2, num3, num4, num5, num6] = generateExamples(d);

So what exactly is a continued fraction? As the name implies, a continued fraction is a fraction that continues. What are fractions that don't continue? Most fractions you might think of, such as $ \frac{11}{13} $.

This isn't the only way to write this fraction though. One way, using Egyptian fractions, is $ \frac{11}{13} = \frac{1}{2} + \frac{1}{3} + \frac{1}{78} $, which we can verify using MATLAB's Symbolic Math Toolbox (which will also be used significantly later on):

eFrac = sym(1/2) + sym(1/3) + sym(1/78)

Another way is $ \frac{11}{13} = \frac{1}{1 + \frac{1}{ 5 + \frac{1}{2}}} $. We can check this just as we did above:

cFrac = 1/(1 + 1/(5 + sym(1/2)))

This is what we call a finite (simple) continued fraction: continued fraction because it continues, finite because there are finitely many terms, and simple because the numerators that appear are all 1. We also note that all of the coefficients we want should be positive integers; variations with negative integers also exist and the resulting continued fractions will have different properties.

This begs two natural questions:

- Are there continued fractions that have infinitely many terms?
- Are there continued fractions with numerators other than 1?

Yes and yes, which we'll demonstrate by example:

- The golden ratio $ \phi = \frac{1 + \sqrt{5}}{2} = 1+ \frac{1}{1 + \frac{1}{1 + \cdots}} \approx 1.618... $,
- $ \frac{4}{\pi} = 1 + \frac{1^2}{2 + \frac{3^2}{2 + \frac{5^2}{2 + \cdots}}} \approx 1.2732... $.

In this post we'll focus on infinite, simple continued fractions: whenever we say continued fraction from here on, that's what we mean. We won't touch on non-simple continued fractions, but rest assured they exist, often exhibit interesting patterns, and are fascinating mathematical objects to explore.

So how does one compute the continued fraction for a number? The general algorithm is a sort of Euclid's algorithm, wherein we take off the integer portion of a number, invert the fraction, and repeat. Let's see how this works with $ \frac{11}{13} $:

- $ \frac{11}{13} $ has no integer part, so we have $ \frac{11}{13} = 0 + \frac{11}{13} $.
- Next we invert the fractional part of the last expression, giving $ \frac{11}{13} = 0 + \frac{1}{\frac{13}{11}} $.
- For the newest "denominator", we repeat the process of taking out an integer part: since $ \frac{13}{11} = 1 + \frac{2}{11} $, we get $ \frac{11}{13} = 0 + \frac{1}{1 + \frac{2}{11}} $.
- Having isolated the integer part, we now invert the new fractional part $ \frac{2}{11} $, giving $ \frac{11}{13} = 0 + \frac{1}{1 + \frac{1}{\frac{11}{2}}} $.
- Since $ \frac{11}{2} = 5 + \frac{1}{2} $, we substitute this in to get the continued fraction $ \frac{11}{13} = 0 + \frac{1}{1 + \frac{1}{5 + \frac{1}{2}}} $.
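These steps translate almost line-for-line into code. As a sanity check with exact rational arithmetic, here is the take-the-integer-part-and-invert loop sketched in Python (the MATLAB implementation appears later in the post):

```python
from fractions import Fraction

# unfold 11/13 into its continued fraction coefficients;
# for a rational input the loop terminates once the fractional part hits 0
x = Fraction(11, 13)
coeffs = []
while True:
    a = x.numerator // x.denominator  # integer part (floor)
    coeffs.append(a)
    frac = x - a
    if frac == 0:
        break
    x = 1 / frac  # invert the fractional part and repeat

print(coeffs)  # [0, 1, 5, 2]
```

The output matches the coefficients found by hand above.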

It shouldn't be too surprising that any finite continued fraction corresponds to a rational number (just apply the algorithm backwards from the end of the continued fraction), but it may not be obvious that every rational number has a finite continued fraction. As a comparison, the decimal representation of $ \frac{1}{2} $ is $ 0.5 $, while the decimal representation of $ \frac{1}{3} $ is $ 0.333... $. It turns out that a number's continued fraction has finitely many coefficients exactly when the number is rational, and infinitely many coefficients exactly when the number is irrational.
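Applying the algorithm backwards is easy to check too: start from the last coefficient and repeatedly compute $ a + 1/x $. A minimal sketch in Python with exact rationals:

```python
from fractions import Fraction

# fold [0; 1, 5, 2] back up from the bottom: start with the last
# coefficient and repeatedly compute a + 1/x
coeffs = [0, 1, 5, 2]
x = Fraction(coeffs[-1])
for a in reversed(coeffs[:-1]):
    x = a + 1 / x

print(x)  # 11/13
```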

We've already seen one example of an infinite continued fraction, $ \phi = 1+ \frac{1}{1 + \frac{1}{1 + \cdots}} $. There are two ways to find the coefficients: one involves the algorithm from above, while the other leverages the fact that ϕ is a solution to $ 0 = \phi^2 - \phi - 1 $. Let's see both methods in action:

- Using the approximation $ \phi =1.618033988749895 $, we get $ \phi = 1 +0.618033988749895 $. Inverting the fraction gives $ \phi = 1 + \frac{1}{1.618033988749894} $, since $ \frac{1}{0.618033988749895} \approx 1.618033988749894 $. Notice that this new denominator is incredibly close to our original approximation for ϕ, and only differs in the last digit we've recorded. Continuing on, we get $ \phi = 1 + \frac{1}{1 + \frac{1}{\frac{1}{0.618033988749894}}} $, etc.
- The second method doesn't rely on approximations, but includes some of the observations in the last method. In particular, we saw that $ \frac{1}{\phi - 1} \approx \phi $, which upon rearranging gives $ 1 = \phi^2 - \phi $, which is precisely the equation $ 0 = \phi^2 - \phi - 1 $. What we'll do now is start with the quadratic equation for ϕ and rearrange it to get us the continued fraction fairly easily. So, starting with $ 0 = \phi^2 - \phi - 1 $, we rearrange to get $ \phi^2 = \phi + 1 $, and divide by ϕ to end up with $ \phi = \frac{\phi + 1}{\phi} $. Cleaning up the right-hand side of this last equality gives $ \phi = 1 + \frac{1}{\phi} $. What this expression suggests is that, since we have $ \phi = $ something, let's plug that something in for the ϕ on the right-hand side: $ \phi = 1+ \frac{1}{1 + \frac{1}{\phi}} $. Repeating this again gives $ \phi =1 + \frac{1}{1 + \frac{1}{1+\frac{1}{\phi}}} $, and so on.
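The recurrence $ \phi = 1 + \frac{1}{\phi} $ also gives a quick numerical check: iterating x → 1 + 1/x from any positive starting value converges to ϕ. A minimal sketch:

```python
# iterate x -> 1 + 1/x; the fixed point satisfies x^2 = x + 1,
# whose positive root is the golden ratio
x = 1.0
for _ in range(60):
    x = 1 + 1 / x

phi = (1 + 5 ** 0.5) / 2
print(abs(x - phi))  # tiny: the iteration has converged to phi
```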

This second method can be used for a lot of roots of quadratic equations, though for other irrational numbers there may or may not be some nice properties we can exploit. Before discussing how the general algorithm works, and implementing it in MATLAB, we'll see one more example of finding a continued fraction.

Let's find the continued fraction for $ \sqrt{2} $. First, recall that $ \sqrt{2} $ is a solution to the quadratic equation $ x^2 - 2 = 0 $. We can try to write this in the same form that worked for ϕ, namely $ x = \frac{2}{x} $, but this won't actually get us any closer. Instead, we're going to add 0 to $ \sqrt{2} $ and hope for the best: $ \sqrt{2} = \sqrt{2} + 1 - 1 = 1 + \left( \sqrt{2} - 1\right) $. This is the first step of the general algorithm, since $ \sqrt{2} \approx 1.414 $, so we've successfully isolated the integer part. Next we invert the fractional part to get $ \frac{1}{\sqrt{2}-1} = \frac{\sqrt{2}+1}{(\sqrt{2}-1)(\sqrt{2}+1)} = \sqrt{2} + 1 $. Thus, our continued fraction should look like $ \sqrt{2} = 1 + \frac{1}{1 + \sqrt{2} } $. At this point we have the recurrence we need, and we can keep substituting this expression for $ \sqrt{2} $ back in on the right-hand side to get $ \sqrt{2} = 1 + \frac{1}{1 + 1 + \frac{1}{1 + \sqrt{2}} } = 1 + \frac{1}{2 + \frac{1}{1 + \sqrt{2}}} = 1 + \frac{1}{2 + \frac{1}{2 + \frac{1}{ 1 + \sqrt{2}}}} $, and so on.

As you may have noticed, writing out a continued fraction can get fairly unwieldy. Since we're working with simple continued fractions, the only coefficients we have to keep track of are the "integer pieces" in each level of the continued fraction. For $ \sqrt{2} $ the integer piece is 1, but all of the subsequent coefficients are 2. So that we don't have to write out $ \sqrt{2} = 1 + \frac{1}{ 2 + \frac{1}{ 2 + \frac{1}{2 + \cdots} } } $ each time, we'll use the condensed notation $ \sqrt{2} = [1; 2,2,...] $. The semicolon just indicates that the first coefficient is "special" since it's the actual integer part. As another example, $ \phi =1 + \frac{1}{1 + \frac{1}{1+\frac{1}{1 + \cdots}}} = [1; 1, 1, ...] $.

As a third example, $ \pi = [3; 7, 15, 1, 292, 1, 1,...] = 3 + \frac{1}{7 + \frac{1}{15 + \frac{1}{1 + \frac{1}{292 + \frac{1}{1 + \cdots}}}}} $. Next, we'll implement the algorithm for computing continued fractions and verify this last representation.

More generally, if $ a_0, a_1, a_2, ... $ are positive integers, then the continued fraction $ a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \cdots}} $ is written $ [a_0; a_1, a_2, ...] $.

As a quick reminder, the algorithm to find the coefficients for a continued fraction involves iteratively (1) taking out the integer part of a quantity and (2) inverting the fractional part, repeating this indefinitely; in the case of using a computer, we repeat this process until our level of precision is met or exceeded. Let's see what this looks like for ϕ and π.

As a test case to verify the algorithm, and its accuracy, we'll start with ϕ's continued fraction coefficients. Recall that $ \phi = [1; 1, 1, ...] $, so as soon as the algorithm returns a coefficient that is not 1, we'll know that's around the limitation of what we can accurately compute.

phi = vpa((1 + sqrt(sym(5)))/2,d) % define phi as a symbolic variable

tempR = phi; % tempR is the number we have at the latest stage of the algorithm

numTerms = 100; % let's start with 100 coefficients computed

%initialize an array of zeros so we have space allocated at the start for

%everything

A = zeros(1,numTerms);

%the main for loop where we compute the coefficients

for ii = 1:numTerms

%compute the integer piece of wherever we're at in the process

A(ii) = floor(tempR); % floor finds the nearest integer below tempR

%set up to get the next integer piece, etc.

tempR = 1/(tempR - floor(tempR)); % remove the integer piece and invert the fraction

end

% let's see what was computed

A

The 96th coefficient is 2, not 1, so with this precision we're getting roughly 95 coefficients accurately. While this may not seem like many coefficients, using the continued fraction with these 95 coefficients would let us compute ϕ to within about 40 digits. This follows from Lochs' theorem, since for ϕ it takes about 2.39 continued fraction coefficients to produce 1 digit of accuracy.
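We can sanity-check this estimate directly: the convergent built from n all-ones coefficients is a ratio of consecutive Fibonacci numbers, and counting how many digits it gets right should give roughly n/2.39. A sketch in Python, using the decimal module for the high-precision comparison:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 60
phi = (1 + Decimal(5).sqrt()) / 2

# the convergent of [1; 1, 1, ...] with n coefficients,
# built exactly as a ratio of consecutive Fibonacci numbers
n = 95
x = Fraction(1)
for _ in range(n - 1):
    x = 1 + 1 / x

approx = Decimal(x.numerator) / Decimal(x.denominator)
digits = -abs(approx - phi).log10()
print(digits)  # roughly n / 2.39, i.e. about 40 correct digits
```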

Next, let's do the same with $ \pi = [3; 7, 15, 1, 292, 1, 1,...] $.

symPi = vpa(sym(pi),d)

tempR = symPi; % tempR is the number we have at the latest stage of the algorithm

numTerms = 100; % let's start with 100 coefficients computed

%initialize an array of zeros so we have space allocated at the start for

%everything

A = zeros(1,numTerms);

%the main for loop where we compute the coefficients

for ii = 1:numTerms

%compute the integer piece of wherever we're at in the process

A(ii) = floor(tempR); % floor finds the nearest integer below tempR

%set up to get the next integer piece, etc.

tempR = 1/(tempR - floor(tempR)); % remove the integer piece and invert the fraction

end

% let's see what was computed

A

Just as expected! In case there's any doubt, the continued fraction coefficients for π can be externally verified using the On-Line Encyclopedia of Integer Sequences (OEIS) as sequence A001203.

Since we'll use the algorithm over and over, it's been encoded in the function getCF available at the end of this post.

We can also visualize these coefficients using bar graphs; the bars are just a quick graphical aid to get a sense of the distribution of the coefficients, and in particular how large they can get. We're keeping the precision high, since we're about to go big with the number of coefficients we compute.

piCF = getCF(vpa(sym(pi),d),100);

% first 10 coefficients

bar(piCF(1:10))

title("First 10 continued fraction coefficients of \pi")

%first 100 coefficients

bar(piCF)

title("First 100 continued fraction coefficients of \pi")

Before getting to the core of this blog post - statistical properties of continued fraction coefficients - let's see what some of the other numbers we defined at the start of this post look like.

tic

num1CF = getCF(num1,100);

toc

bar(num1CF)

title("First 100 continued fraction coefficients of the Fibonacci constant")

tic

num2CF = getCF(num2,100);

toc

bar(num2CF)

title("First 100 continued fraction coefficients of Champernowne's constant")

tic

num3CF = getCF(num3,100);

toc

bar(num3CF)

title("First 100 continued fraction coefficients of the Prime constant")

tic

num4CF = getCF(num4,100);

toc

bar(num4CF)

title("First 100 continued fraction coefficients of the Squares constant")

tic

num5CF = getCF(num5,100);

toc

bar(num5CF)

title("First 100 continued fraction coefficients of Liouville's constant")

tic

num6CF = getCF(num6,100);

toc

bar(num6CF)

title("First 100 continued fraction coefficients of a random decimal")

Some of the coefficients get quite large, but most tend to stay closer (relatively speaking) to 1. While many of these coefficients may seem random, they have some pretty remarkable statistical properties as we'll explore next.

Since we've mentioned statistics, let's start by taking means of the coefficients and seeing what pops out. In particular, since we've been constructing continued fractions one coefficient at a time, we're going to take running means of the coefficients and see what their long-term behaviour is. Recall that the mean of numbers $ a_1, a_2, ..., a_n $ is $ \frac{a_1 + \cdots + a_n}{n} $.

We'll start by considering the running means of π's coefficients.

Means = zeros(1,100); % initialize an array to hold the means

for ii = 1:100

Means(ii) = mean(piCF(1:ii));

end

Means

Let's visualize these results with a line plot:

plot(Means)

At first glance the plot suggests the means are tending towards a value near 10, but inspecting the array of means shows the apparent limiting value is closer to 8. Let's do this same procedure for the other continued fractions we computed as examples and plot them all on the same graph.

Means = zeros(7,100); % initialize an array to hold the means

% this time 7 rows since we have pi and 6 other examples

% add the means for pi

for ii = 1:100

Means(1,ii) = mean(piCF(1:ii));

end

% add the means for the Fibonacci constant

for ii = 1:100

Means(2,ii) = mean(num1CF(1:ii));

end

% add the means for Champernowne's constant

for ii = 1:100

Means(3,ii) = mean(num2CF(1:ii));

end

% add the means for the Prime constant

for ii = 1:100

Means(4,ii) = mean(num3CF(1:ii));

end

% add the means for the Squares constant

for ii = 1:100

Means(5,ii) = mean(num4CF(1:ii));

end

% add the means for Liouville's constant

for ii = 1:100

Means(6,ii) = mean(num5CF(1:ii));

end

% add the means for random decimal

for ii = 1:100

Means(7,ii) = mean(num6CF(1:ii));

end

% plot all lines together

plot(Means')

ylim([0 100])

We can generate even more examples, and evidence for conjectures, by constructing more random decimals and plotting their running means.

numExperiments = 100; % the number of random numbers to generate

Means = zeros(numExperiments,100); % initialize an array to hold the means

tic % to time the computation

for jj = 1:numExperiments

%copying the code from above that generates a random decimal

tempDigits = randi(10,[1 d]) - 1; %generate an array of randomly selected digits between 0 and 9

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

randDec = vpa("0."+temp,d);

% compute its CF

randCF = getCF(randDec,100);

% compute its running means

for ii = 1:100

Means(jj,ii) = mean(randCF(1:ii));

end

end

toc % display how much time elapsed for the data generation

plot(Means')

ylim([0 100])

For most of the continued fractions, the means seem to tend to a value between 5 and 20, and all seem to be greater than 2.5. This is interesting, and with more coefficients we may generate more evidence for this observation - we may even generate a

Conjecture: the running means of (most) continued fraction coefficients are at least 2.5

(and prove it, later on).

Here though we'll pivot, and instead of exploring means, we'll explore geometric means of the coefficients. For numbers $ a_1, a_2, ..., a_n $, their geometric mean is the quantity $ (a_1a_2 \cdots a_n)^{1/n} $.

Let's start by computing the running geometric means of the continued fraction coefficients of π.

Geo = ones(1,99); % initialize the array to store geometric means

% we're taking 1 less than the size of the CF since we're ignoring the

% initial integer part

%the main for loop, where we loop through and compute the geometric means

%of the sequences

for ii = 2:length(piCF)

Geo(ii-1) = prod(nthroot(piCF(2:ii),ii-1)); %note the indexing: piCF(2:ii) is ii-1 coefficients, so we take the (ii-1)th root

end

Geo

plot(Geo)

The geometric means seem to converge to a value just below 3. Let's see what the other example numbers indicate.

Geo = ones(7,99); % initialize an array to hold the geometric means

% this time 7 rows since we have pi and 6 other examples

% add the geometric means for pi

for ii = 2:100

Geo(1,ii-1) = prod(nthroot(piCF(2:ii),ii-1));

end

% add the geometric means for the Fibonacci constant

for ii = 2:100

Geo(2,ii-1) = prod(nthroot(num1CF(2:ii),ii-1));

end

% add the geometric means for Champernowne's constant

for ii = 2:100

Geo(3,ii-1) = prod(nthroot(num2CF(2:ii),ii-1));

end

% add the geometric means for the Prime constant

for ii = 2:100

Geo(4,ii-1) = prod(nthroot(num3CF(2:ii),ii-1));

end

% add the geometric means for the Squares constant

for ii = 2:100

Geo(5,ii-1) = prod(nthroot(num4CF(2:ii),ii-1));

end

% add the geometric means for Liouville's constant

for ii = 2:100

Geo(6,ii-1) = prod(nthroot(num5CF(2:ii),ii-1));

end

% add the geometric means for random decimal

for ii = 2:100

Geo(7,ii-1) = prod(nthroot(num6CF(2:ii),ii-1));

end

% plot all lines together

plot(Geo')

ylim([0 10])

We can also run the same experiment as above, generating a bunch of random decimals and seeing what their running geometric means are.

numExperiments = 100; % the number of random numbers to generate

Geo = zeros(numExperiments,99); % initialize an array to hold the geometric means

tic % to time the computation

for jj = 1:numExperiments

%copying the code from above that generates a random decimal

tempDigits = randi(10,[1 d]) - 1; %generate an array of randomly selected digits between 0 and 9

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

randDec = vpa("0."+temp,d);

% compute its CF

randCF = getCF(randDec,100);

% compute its running geometric means

for ii = 2:100

Geo(jj,ii-1) = prod(nthroot(randCF(2:ii),ii-1)); % geometric mean of the ii-1 coefficients randCF(2:ii)

end

end

toc % display how much time elapsed for the data generation

plot(Geo')

ylim([0 10])

Remarkably, the running geometric means tend towards a value between 2 and 3 much more noticeably than the running means did. Of course with more continued fraction coefficients we'd get more data, and hopefully stronger evidence, for a conjecture, like

Conjecture: the running geometric means of (most) continued fraction coefficients converge to 2.5.

The value 2.5 was chosen as the visual mean of the lines in our experiments. So the question remains: is our conjecture true?

Unfortunately it's not actually true. The actual value is $ \approx 2.68545... $

This result is known as Khinchin's constant, and was actually conjectured and proven in the 1930s. What's even wilder is that the exact value is known, and is the infinite product $ \prod_{r=1}^\infty \left( 1 + \frac{1}{r(r+2)}\right)^{\log_2 r} $. The proof of this goes well beyond this post and involves more advanced mathematical techniques developed to study dynamical systems, but the linked Wikipedia article is a great starting point for intrepid explorers curious to learn more.
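The infinite product converges quickly enough that simply truncating it reproduces the first few digits of Khinchin's constant. A rough numerical sketch:

```python
import math

# truncate the infinite product defining Khinchin's constant,
# K = prod_{r>=1} (1 + 1/(r(r+2)))^(log2 r), at a large finite r
K = 1.0
for r in range(1, 500001):
    K *= (1 + 1 / (r * (r + 2))) ** math.log2(r)

print(K)  # approaches 2.68545... as the cutoff grows
```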

How does this connect to the exploration of means? Well, the AM-GM inequality tells us that the arithmetic mean (the usual mean) is always greater than or equal to the geometric mean of the same set of numbers. Since Khinchin's constant is what most running geometric means converge to, the running (arithmetic) means of continued fraction coefficients should converge to values greater than Khinchin's constant.
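As a tiny numerical illustration of AM-GM, using the first few continued fraction coefficients of π from earlier:

```python
import math

a = [3, 7, 15, 1, 292, 1, 1]  # first few CF coefficients of pi
am = sum(a) / len(a)                  # arithmetic mean
gm = math.prod(a) ** (1 / len(a))     # geometric mean
print(am, gm)  # the arithmetic mean dominates the geometric mean
```

The single huge coefficient 292 drags the arithmetic mean way up, while the geometric mean barely moves, which is one intuition for why the geometric means in our plots settle down so much more cleanly.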

This discussion ignored many (arguably) more standard topics when it comes to continued fractions, like:

- convergents, or what you get when you truncate a continued fraction after finitely many terms,
- rates of convergence for convergents, or why $ \frac{22}{7} $ and $ \frac{355}{113} $ are excellent approximations of π,
- the fact that continued fractions are the "best" rational approximants for numbers,
- MATLAB's built-in rational approximation tools, which can find continued fractions with possibly negative coefficients,
- Lochs' theorem, which informs how many coefficients (or levels) of a continued fraction you need for another digit of accuracy (~1 coefficient per digit for almost all numbers),
- how roots of quadratic polynomials have periodic continued fractions (the coefficients repeat), and are the only kinds of irrational numbers with periodic coefficients,

and so on. Indeed, when looking at convergents, a result similar to Khinchin's constant exists, called Lévy's constant.

There even exist (non-simple) continued fraction expansions for some well-known functions, like

$ \sin(x) = \frac{x}{1 + \frac{x^2}{2\cdot 3 - x^2 + \frac{2\cdot 3 x^2}{4\cdot 5 - x^2 + \frac{4\cdot 5 x^2}{6\cdot 7 - x^2 + ...}} } } $. How this expression is obtained goes far beyond the discussion here, but hopefully serves as motivation to keep exploring these fascinating objects (the Wikipedia page on Euler's continued fraction might be a good starting point).

Happy exploring!

function [num1, num2, num3, num4, num5, num6] = generateExamples(d)

%%%%%%%%%%

%Number 1 - Fibonacci Constant

%0.1123581321..., constructed by concatenating the Fibonacci numbers 1, 1, 2, 3, 5...

numTerms = 70; %set the number of terms in the Fibonacci sequence to use

tempDigits = zeros(1,numTerms); %generate an empty array to store the first numTerms terms of the Fibonacci sequence

tempDigits(1) = 1;%initialize the initial condition

tempDigits(2) = 1;%initialize the initial condition

%the main for loop where we fill in the Fib. sequence

for ii = 3:numTerms

tempDigits(ii) = tempDigits(ii-1) + tempDigits(ii-2);

end

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

num1 = vpa("0."+temp,d); % tell MATLAB to use Variable Precision Arithmetic (vpa) for high precision

%%%%%%%%%%

%Number 2 - Champernowne's Constant

%0.1234567891011..., constructed by concatenating the digits 1, 2, 3, ...

tempDigits = 1:d; %generate the integers 1 through d

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

num2 = vpa("0."+temp,d); % tell MATLAB to use Variable Precision Arithmetic (vpa) for high precision

%%%%%%%%%%

%Number 3 - Prime Constant

%0.235711..., formed by concatenating the prime numbers 2, 3, 5, 7, 11...

tempDigits = primes(d); %generate an array of the primes below d

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

num3 = vpa("0."+temp,d); % tell MATLAB to use Variable Precision Arithmetic (vpa) for high precision

%%%%%%%%%%

%Number 4 - Squares Concatenated Constant

%0.1491625..., formed by concatenating the integers squared 1, 4, 9, 16...

tempDigits = 1:d; %generate the integers 1 through d

tempDigits = tempDigits.^2; %square each of the terms in the array (hence the .^)

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

num4 = vpa("0."+temp,d); % tell MATLAB to use Variable Precision Arithmetic (vpa) for high precision

%%%%%%%%%%

%Number 5 - Liouville's Constant

%0.1100010000000000000000010..., formed by placing a 1 in each n!-th decimal place (1st, 2nd, 6th, 24th, ...) and zeros elsewhere.

numTerms = 8; %number of factorial terms to use

LiouvilleIndeces = factorial(1:numTerms); %first identify where the ones should go

tempDigits = zeros(1,max(LiouvilleIndeces)+1); %generate an array of zeros long enough to hold the final 1

tempDigits(LiouvilleIndeces) = 1; %place a 1 at each factorial-numbered position

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

num5 = vpa("0."+temp,d); % tell MATLAB to use Variable Precision Arithmetic (vpa) for high precision

%%%%%%%%%%

% Number 6 - Random Decimal

% A number whose decimal digits are all uniformly chosen between 0 and 9

tempDigits = randi(10,[1 d]) - 1; %generate an array of randomly selected digits between 0 and 9

temp = num2str(tempDigits); %convert the array to a string

temp = strrep(temp," ",""); %remove empty space in the string

num6 = vpa("0."+temp,d); % tell MATLAB to use Variable Precision Arithmetic (vpa) for high precision

end

function A = getCF(r,numTerms)

% get the continued fraction coefficients

%

% the input r is the number we want the CF of

% numTerms is however many coefficients we want to compute

%

tempR = r; % tempR is the number we have at the latest stage of the algorithm

% if numTerms isn't specified, default to computing 10 coefficients

if ~exist('numTerms','var')

numTerms = 10;

end

%initialize an array of zeros so we have space allocated at the start for

%everything

A = zeros(1,numTerms);

%the main for loop where we compute the coefficients

for ii = 1:numTerms

%compute the integer piece of wherever we're at in the process

A(ii) = floor(tempR);

%set up to get the next integer piece, etc.

tempR = 1/(tempR - floor(tempR)); % remove the integer piece and invert the fraction

end

end

The File Exchange provides access to more than 46,000 free MATLAB projects. For 33,854 of these projects, the authors included the word "toolbox" in the title.

Let's try to clarify things here. Consider the words project and toolbox. What is the difference?

A project is the entirety of the thing that you work on. These days, it often lives in a GitHub repository (so "project" and "repo" are sometimes used interchangeably). A toolbox is the thing you distribute to people who will then install and use it. So toolboxes derive from projects, but in general they aren't the same thing. A simple working project might be identical with its distributable toolbox. But as you add tests and other supporting material that won't ship with the toolbox, the project becomes the larger containing entity.

Here's how I sometimes picture it: the project is the garage and the toolbox is the car.

Now let me put on top of this picture some of the related words we use.

That's the basic idea of what a toolbox is. But let's zoom in a little. What should a modern standard toolbox look like?

For years we've been fielding requests about what constitutes a good toolbox. People want to know: How should I organize my toolbox code? How can I avoid the common pitfalls of distributing code? How can I encourage others to collaborate with me?

To answer these questions, we recently rolled out guidance on MATLAB Toolbox Best Practices. You can read all the details as published in this GitHub repo:

**mathworks/toolboxdesign: Best practices for creating MATLAB toolboxes.**

Or if you prefer, you can browse through this simple example instance of a standard toolbox, the Arithmetic Toolbox.

**mathworks/arithmetic: Example toolbox created to showcase MATLAB Toolbox Best Practices.**

Here is a short list of things we recommend.

- use GitHub
- include a README
- make it a MATLAB Project
- put all the "working code" into a toolbox/ subfolder
- package with MLTBX files
- create examples with Live Scripts (MLX files)
- create tests that can be run using GitHub Actions
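Putting those conventions together, a minimal project might be laid out like this (the file names here are hypothetical, loosely modeled on the Arithmetic Toolbox example):

```
arithmetic/                  project root = the GitHub repo
├── README.md                what it is, how to install it
├── Arithmetic.prj           MATLAB Project file
├── toolbox/                 everything that ships in the MLTBX
│   ├── add.m
│   ├── subtract.m
│   └── examples/
│       └── GettingStarted.mlx
└── tests/                   run in CI via GitHub Actions
    └── testAdd.m
```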

We don't claim you have to do all these things in order to be a good person. These are conventions that we recommend. If you follow this advice (or even just some of it), it will be easier for others to find and use your code. The tools you create will have more reach and more impact.

I'll close by observing that this is an early version of a living standard, so you can expect it to evolve over time. In fact, we hope you will help us evolve it by opening issues or participating in discussions. We hope it will be useful to you as you make and share tools in MATLAB.

Today's guest article is by Hans Scharler.

Recently, I had the privilege of engaging in a captivating conversation with Heather Gorr, Ph.D., a senior MATLAB Product Manager and Data Science expert at MathWorks. Our discussion covered a broad range of topics, from her daily work routine to her personal journey into the world of data science.

I held the interview live on the MATLAB Discord Server. I even kicked off the AMA with a Lex Fridman-inspired podcast introduction.

Today we're privileged to be joined by Heather, a force in the world of data science, a Senior MATLAB Product Manager at MathWorks. Trained as a materials science engineer and physicist, Heather's intellectual journey is one marked by a fierce hunger for knowledge and an inexhaustible curiosity that sees no boundaries. She has shown us that the path from the academic world to the tech industry isn't a straight line, but a beautiful winding road of discovery and continuous learning. Today, Heather stands not only at the edge of today's technological innovation, but also at the dawn of tomorrow's potential, creating a space where technology and human ingenuity dance in a never-before-seen ballet. As we dive into this conversation, remember that it is through the insights of brilliant minds like Heather's that we get a glimpse into the fascinating future that lies ahead of us.

Buckle up, and let's go on this remarkable journey together.

Here, I'll share nine insights that emerged from our conversation—each one offering a unique perspective on the field of data science and Heather's approach to her work.

I asked Heather to describe one day from the past two weeks to give us some insight into what she may face on a typical day. Heather happened to choose the same day as the Discord AMA. Heather's description of that day reveals the diverse set of tasks and responsibilities that come with her role at MathWorks. From collaborating on a gorilla facial recognition project to hosting webinars and engaging in live streams, her role is dynamic and versatile, offering intellectual stimulation at every turn.

Heather's drive to embrace a wide array of projects and tasks can be traced back to her insatiable curiosity and pursuit of knowledge. She puts an emphasis on continuous learning as it is a critical facet of the data science field, which is always evolving and expanding.

Why MathWorks? Teaching others, engaging with users, discovering cool stuff, and there's always something new to learn.

- Heather Gorr

Heather started studying physics, spent time in Nashville being a musician, and then found her passion in engineering and data science. This career journey serves as a reminder that career paths are not always straightforward—it's okay, and even beneficial, to take time to find what truly motivates and interests you. Heather now plays in a band and gigs out almost every weekend in New England.

Heather joined MathWorks in January 2013. An early but defining moment in her career was responding to a call to action. Someone at MathWorks needed help on a deep learning project and Heather volunteered to help out and apply her experience from her Ph.D. work. That someone turned out to be Cleve Moler, the creator of MATLAB. Heather's journey is all about responding to opportunities. Her response to Cleve Moler's call to action and the resulting projects, collaborations, and learning experiences illustrate the potential value of seizing opportunities. Being open to unexpected opportunities can lead to rewarding experiences that stretch far beyond your comfort zone.

During the interview, Heather opened up about her dyslexia and ADHD and how she navigated academic and professional challenges, offering an important perspective on the intersection of personal experiences and professional growth. The challenges we face can shape our professional journeys in significant ways, and Heather felt it was important to share her struggles and be real about who she is.

Heather's role at MathWorks involves a mix of highly technical work, such as working on machine learning projects, and communication-focused tasks, such as teaching and hosting webinars. This balance underscores the importance of both technical expertise and communication skills in a data science role.

I asked Heather about a specific machine learning project she had worked on, and she told us about gorilla facial recognition. What was fascinating is that Heather was not trained specifically for this work, but her experience from other deep learning projects prepared her to handle it. The project showcases the innovative applications of data science and machine learning, and how valuable it is to work on diverse projects: you never know what you will be working on next or how your experience will prove impactful.

Heather frequently mentions collaborations in her work, such as her projects with Cleve Moler and others at MathWorks. She wanted to highlight the importance of collaborative work in data science, where different team members can bring unique perspectives and skills to solve complex problems.

Heather made it clear that she loved teaching others. Her teaching experience and her enthusiasm for sharing her work through webinars and blog posts highlight her passion for education. It's a reminder of the importance of knowledge sharing and mentorship, which not only helps others learn but also fosters a strong, collaborative community.

My conversation with Heather Gorr not only shed light on her work and philosophy but also provided unique insights into the ever-evolving world of data science. Her journey, experiences, and perspectives have a lot to offer to anyone interested in the field. To follow Heather's work and contributions, you can find her on LinkedIn or visit her MathWorks community profile.
