IFR and ABC Music Notation

Hello IFR friends,

Based on Jef Moine’s ABC Notation Extension Module [1] (written in JavaScript) implementing Phil Nice’s MDNN [2], I have programmed a similar module - the IFR-Module within the abc2svg-family - to realize David Reed’s IFR approach to music.


The first area shows the melody in conventional staff music notation, with the lead-sheet chords right under it.

The next area shows the same melody notes in the T-Line, this time in the tonal map of the given Major key, and marks them according to their appropriate octaves: unmarked for the basic register, a '-superscript for the next higher octave, a ,-subscript for the lower octave, and so on. The rhythmic values are also shown.

The H-Line lists the basic Harmony, i.e. the chords notated IFR-wise and spelled out as a Triad or Tetrad.

My ABC-notation for Guitar Chords supports some additions, e.g.
“CM7:#x1/G” leads to the C Maj7 chord with a #11 extension and with G in the bass.
However, the spelling in the H-Line still shows the basic Triad or Tetrad only.
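Since the “CM7:#x1/G” example packs root, quality, extension and bass into one string, here is a rough sketch of how such a symbol might be pulled apart. This is only my guess at the grammar from that single example; the regex and function names are hypothetical and not part of the module:

```python
import re

# Hypothetical parser for the chord syntax described above
# ("CM7:#x1/G" -> C Maj7, #11 extension, G bass). The grammar
# here is guessed from the single example, not from the module.
CHORD_RE = re.compile(
    r"^(?P<root>[A-G][#b]?)"        # root note, e.g. C, F#, Bb
    r"(?P<quality>[^:/]*)"          # quality, e.g. M7, m7, 7
    r"(?::(?P<ext>[^/]+))?"         # optional extension, e.g. #x1 (= #11)
    r"(?:/(?P<bass>[A-G][#b]?))?$"  # optional slash bass, e.g. /G
)

def parse_chord(symbol):
    """Split a chord symbol into its parts; returns a dict or None."""
    m = CHORD_RE.match(symbol)
    return m.groupdict() if m else None
```

A chord without extension or slash bass, like “Am7”, would simply come back with those fields empty.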

The ABC-script to render the song in a local browser is as follows:

Is this approach worthwhile at all? Anyone interested in me making the module available to others?
In that case, any suggestions or hints for improvement are most welcome.

Friendly yours,
Stenio L.

Jef Moine: http://moinejf.free.fr/
Phil Nice: “The Case for Numerical Music Notation, Part 1: Introduction and History”, on Medium

Welcome to the forum @slingaya

Oooo. Exciting. abcnotation is great. I’ve used StackEdit to do things like this:

Your extension sounds interesting.

My dream abc addon would be something that could produce an IFR melody contour like this (the original of which I hand coded in html & css, with the css mainly being used to ‘squash’ the line height in the melody contour sections - the same could be done in a word processor). I prepared this while doing Jelske’s Introduction to Melodic Improvisation course.



This is soooo cool. But…. Some of us are not so tech savvy. Any chance of getting some foolproof step by step instructions?

abcnotation is a music oriented specialism of a thing called ‘markdown’. The home for abcnotation is https://abcnotation.com/. There is a list of examples, tutorials, etc., on that site at this page: https://abcnotation.com/learn.

The tutorial/guide that got me started was the one by Steve Mansfield: http://www.lesession.co.uk/abc/abc_notation.htm

To use abcnotation you need a ‘markdown editor’. That editor might be an application you download to & run on your computer, or it could be a thing you use in a browser. The ‘StackEdit’ I mentioned is an example of a browser based option: https://stackedit.io/ & is what I started with. I’ve also used ‘EasyABC’ on a Windows desktop. There’s a list of software at https://abcnotation.com/software.
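For anyone who has never seen it, here is a minimal tune in ABC notation (a toy example of my own, not from the tutorials above): the header lines give the tune index, title, metre, unit note length and key, and the body gives the notes, with chord symbols in quotes.

```abc
X:1
T:A Tiny Example
M:4/4
L:1/4
K:C
"C" C D E G | "G7" G F D G, | "C" C4 |]
```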

Sorry that’s not a personal step by step guide as such, but I’m a user not an expert & other people who are experts have written such things already (see above). Markdown & abcnotation are almost inherently at least a bit techie, but I think you may find them well worth investigating if you have a need you think they might help fulfil?

I hope that helps.

Thank you David for that in depth reply. I will look into that and see how much effort it would be for me.

My approach so far has been to practice every few days with one of my fake books. I’ve got a bunch of fake books. I pick a song I’ve heard before and say… ok, we’re in the key of F. Then I read the music saying… starts with a 5, up to 1 2 3 over the 1 chord, etc. Since I’ve heard the song, the melody should play in my head. I try to pick pieces in different keys. With a little practice like that, I can now see the tonal numbers in a few keys like C, F, G. I’m sure the rest are just a matter of time and diligence.

Hi @slingaya and welcome to the forum.

I was interested to read about your work on ABC notation. I use ABC (and the EasyABC software) and recently I thought I could use it to generate some sound files which reproduce the Sing The Numbers tracks.

So, I have a list of number sequences, which I have noted down while listening to the tracks. Is there any way I can get from this numbers list to an ABC file?

I’ve decided to stick with my manual method, but I am going to switch over to GOODNOTES as a rendering tool on my iPad, and see if I can leave pencil and paper behind.

Re-reading, it occurs to me that I assumed you meant the abcnotation stuff, but it’s possible you meant the IFR melody contour example for “When I Fall In Love” that I posted? Preparing that had nothing to do with ABC; all it requires is some very careful (some might say tedious) editing of text - or the basic idea can be adapted & used in any word processor that allows you to alter line spacing. I think I may have written about that on LearningStone when I first did it?

FX: Time passes while I search.

In case it’s of interest I found this…

You can probably do most of what I did using WORD (or any other word processor). If you’d like to give that a go, try using this recipe:

a) Choose a non-proportional (i.e. fixed width) font, e.g. Courier. I find the fixed width makes it easier to line things up.
b) [optional] Set the option to show ‘spaces’ as feint dots (Tools menu > Options > View tab > Formatting marks?). I find that helps when setting out the numbers.
c) Type in the lyric line.
d) Above (or if you prefer, below) that type in the numbers over the appropriate words, giving each number ‘level’ its own line.

That should give a tonal sketch, but the numbers will occupy a lot of space, so…

e) Reduce the ‘Line Spacing’ on the numbers section, to give a more compact (& to me more ‘flowing’) look/feel.

I suspect that with WORD you may find (as I did with Libre Office) that you can’t reduce line spacing below 1 without odd things starting to happen (i.e. losing bits of the numbers). That’s why I opted to use html, because with that I have the option to reduce the line spacing even further. Working in html is however less intuitive, as it typically means working in a text editor which is not ‘WYSIWYG’ (if you recall what that means?).

I hope that makes sufficient sense. If not, please ask.

If you’d like to see the html version (so you can look at the source code & get a feel for what’s involved) we probably need to exchange it by some other means, such as email, since I can’t upload html here (a thing that seems like a very reasonable Forum restriction).

Ah. I recalled that we can use BBCode here, so I can at least post in a little bit of the ‘source’ for the html version of “When I Fall In Love”. Here’s what the first line looks like as ‘raw’ html in a text editor. The <tonalsketch> & <lyrics> markup are bits of configuration ‘magic’ that are defined elsewhere in the file & that don’t need changing for each song (I’m not sure why some bits are in different colours - that’s some sort of ‘Forum magic’!).

       4                    4                                4        
            3                  3                                  3   
                                   2                                 2
    1          1       1         1                1                   
5                   5                          5                      

When I fall in love it will be forever  /      Or I'll never fall in love
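
For what it’s worth, the hand-editing above could also be mechanised. Here is a rough Python sketch (entirely my own toy, not an existing tool) that takes (column, degree) pairs and prints one fixed-width text line per degree, highest degree on top. It ignores octaves, so unlike the chart above it would not know to place a low 5 below the 1:

```python
def tonal_sketch(notes):
    """Render (column, degree) pairs as a fixed-width melody contour:
    one text line per distinct degree, highest degree on top."""
    width = max(col for col, _ in notes) + 1
    levels = sorted({d for _, d in notes}, reverse=True)
    lines = []
    for level in levels:
        row = [" "] * width
        for col, degree in notes:
            if degree == level:
                row[col] = str(degree)
        # strip trailing spaces so the output stays tidy
        lines.append("".join(row).rstrip())
    return "\n".join(lines)

# A tiny rising-and-falling contour: 1 2 3 2 1
print(tonal_sketch([(0, 1), (3, 2), (6, 3), (10, 2), (13, 1)]))
```

The columns would come from lining the numbers up with the lyric, exactly as in steps c) and d) of the recipe above.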


My goal is to create my own library of tunes in the IFR notation from the book, which shows melody and chords by bar. No lyrics. I’ll use this to play the tunes in all keys, and I hope start to remember better what different phrases sound like (I want to say, “aha, that’s a descending arpeggio from the 6th over the 4 chord, etc.”)

I’ve attached my first stab at creating these in GOODNOTES on my ipad (it’s an $8 app), instead of pencil and paper. It took a couple of hours to learn the basics of GOODNOTES, and I decided to use music paper as the background, only because it makes it easier for me to line up the little numbers.

If you saw my handwriting you’d immediately know why I want to use a simple graphics tool. But beyond that, I think using pencil/paper or a simple graphical tool will make you actually focus better on the tune as you construct the tonal transcription, as David Reed calls it. I expect with a little more practice, GOODNOTES will be just like pencil/paper to me.

Great stuff! I tried doing that but my technical skills weren’t up to it. Instead, I have been trying to visualise standard notation melodies directly in IFR rather than writing down the numbers. It turns out to be fairly logical. So in the key of F with one flat, I see the C as 5 so all lines above and below are therefore the odd numbered scale notes. I’m not good at it yet but I have a feeling it will be more rewarding for me than trying to add IFR numbers physically to music notation. So it’s just sight reading IFR (instead of solfa) rather than adding it as an extra line of notation.

Welcome @slingaya, and thank you to everyone else who has contributed to this thread. I’m far behind all of you in my adoption of these tools. Our current system is taken directly from the Flintstones. On the surface it looks automated but inside the machine is a tiny dinosaur (actually a human named Jessé) who creates most of our graphics using a font that we created (called the IFR font).

Standardizing on the IFR font was a step forward. Having a library of templates and recycled illustrations certainly helps too. But whenever we need to write out a whole new song in tonal numbers, this is still a painfully slow, inefficient manual process.

My own needs for a solution align with what @DavidW and @hender99 described. But in my case, I don’t need the translation from traditional sheet music or from audio recordings. So we don’t need a tool of harmonic analysis. Honestly it would be a huge step forward for us to simply have a tool of harmonic dictation. In other words, even in cases where we already know the tonal numbers to a song, it would be wonderful to have a tool that allows us to input this data without having to use clumsy image editing tools like Illustrator.

What we’re doing today is the musical equivalent of type setting from the early days of newspaper publishing. Every time we wish to publish a song, someone needs to drag and drop every tonal number into its place in an image. Surely this can be updated!

It’s easy to fantasize about very sophisticated solutions which would make use of voice recognition and pitch recognition to automate this entire process for us. But at this point, even just having a simple program to automate the placement of the numbers on the page would be a step forward.

So for example, imagine that I already know the tonal numbers to a song, and I merely want to dictate these tonal numbers to the program. For this purpose, I don’t even need to sing the pitches because I can just type the numbers. I only need the program’s help with two things:

  1. placing each number vertically based on its value relative to the other numbers
  2. placing each number horizontally based on its rhythm (which measure, which beat)

That’s really all we need. And for this purpose, the data entry tool could be the keyboard. The natural notes 1 through 7 could just be the keyboard keys 1 through 7. And whenever we need to express an outside note like b3, this could just be the letter key directly below keys 2 and 3 on a standard computer keyboard. (I’m talking about the letter W if you want to look at your keyboard to see what I’m picturing.)

When I type this letter W, by default the program could place the note b3 on the page. Then after the entire song has been entered, there should be some easy way to mouse over that note b3 and convert it to #2 if I prefer to express it that way. (And of course there needs to be an easy way to select and manually correct any of the entered notes, because mistakes will happen.)

The scheme above solves the problem of how to tell the computer which tonal numbers we want written. The next question is the vertical placement of these notes based on their pitch. This is a bit more complex because the program will have to make some judgments about this only AFTER all notes have been entered. But I think there must already be algorithms created to distribute objects vertically within a given region based on their values. If not, it should be a straightforward thing to code.

The only remaining problem is the distribution of the notes horizontally based on their rhythm. For this, we just need the input process to be a live recording. There should be a parameter to let the user specify a tempo, and the program should provide a metronome beat. There should be a record button. When you press record, the program should start with two measures of count-in. Any keys pressed during these two measures of count-in should also be recorded, and should be understood as pickup notes that are played before measure 1 officially begins. And until the record button is pressed again to stop the recording, the user should be able to type notes on the keyboard, and these notes should be displayed on the page at the exact moment in which they are typed. So for example, if I press the number 3 on my keyboard roughly on the first beat of measure 2, then a number 3 should be printed on the page as the first beat of measure 2.

In existing music dictation programs, much of the complexity comes from the notation of rhythm. For example, if somebody types a note just a millisecond too late, you don’t want the program to interpret that as a “64th rest” followed by a tie of 63 separate 64th notes making up the rest of the beat. But for IFR, I would propose that we simply sidestep all of that complexity by avoiding any rhythmic notation whatsoever. All we want from the computer is help with the initial dropping of the numbers on the page. If we can just get each number into the right place both vertically and horizontally, any errors in horizontal placement can be fixed very quickly with a simple drag and drop of the mouse.

As you can see, I’m not looking for a program that “understands” the music in any way. We just need a program that allows us to enter data in time through the mechanism of a “record” button, and that spaces out this data horizontally according to the time in which each keystroke was entered. And then we want that program to also distribute these objects vertically based on the values entered. As for the question of multiple octaves, this could either be solved at the moment of data entry using different keys (which I personally don’t like) or it could be solved after all of the data is entered. To fix the octaves after the recording is finished, there just needs to be an easy way to select a bunch of notes with the mouse and choose to raise or lower them by one octave.
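To make the capture step concrete, here is a rough sketch in Python. Everything in it is hypothetical illustration: only the W → b3 mapping comes from the description above, the other flat-key assignments (q, r, t, y) are my own guess at the same pattern, and a real tool would capture keystrokes live rather than take a prepared list of (timestamp, key) events:

```python
# Hypothetical sketch of the "record button" capture described above.
# Input: (timestamp_in_seconds, key) events from a recorded take.
# Output: (measure, beat, tonal_number), each event snapped to the
# nearest beat. Only W -> b3 is from the post; q/r/t/y are my guesses.
KEY_TO_DEGREE = {str(n): str(n) for n in range(1, 8)}
KEY_TO_DEGREE.update({"q": "b2", "w": "b3", "r": "b5", "t": "b6", "y": "b7"})

def quantize_take(events, tempo_bpm, beats_per_measure=4, count_in_measures=2):
    """Snap keystroke events to beats. Beats during the count-in come
    out as measure 0 (or lower), i.e. pickups before measure 1."""
    seconds_per_beat = 60.0 / tempo_bpm
    count_in_beats = count_in_measures * beats_per_measure
    placed = []
    for t, key in events:
        degree = KEY_TO_DEGREE.get(key.lower())
        if degree is None:
            continue  # not a note key; ignore
        beat = round(t / seconds_per_beat) - count_in_beats
        measure, beat_in_measure = divmod(beat, beats_per_measure)
        placed.append((measure + 1, beat_in_measure + 1, degree))
    return placed
```

At 120 bpm in 4/4 with the two-measure count-in, a key pressed at about 6 seconds lands on beat 1 of measure 2, and a key pressed during the count-in comes out as a pickup in “measure 0”, just as the description above asks for.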

And again, remembering that this is only a data input tool, the program should then turn control back over to the user to make any manual adjustments or additions. If this program could somehow interface to Adobe products like Photoshop and Illustrator, that would be incredible. But does that even exist?

I don’t mean to hijack the conversation. I just wanted to add one more set of requirements, just to give another vision of how a tool like this might be useful. But I appreciate that other people have other requirements, and perhaps we’re talking about different tools altogether.

I wonder if the Musescore software might be of assistance @ImproviseForReal ? I’ve only tinkered with it and am barely even a beginner let alone an expert, but I think I read somewhere that you can set up custom note heads? If it was possible to have your font as note heads that could be amazingly useful!

Yes, that’s a great suggestion, @DavidW. We’ll definitely explore all technology options before the next time that we need to mass produce tonal sketch drawings. For now we’re just kind of limping along, but I appreciate the conversation and I would like to be smarter about these things.

@ImproviseForReal re. MuseScore.
I did a quick search…

There are various notehead options (https://musescore.org/en/handbook/3/noteheads), but I’m not sure that’s necessarily helpful.

‘Notehead schemes’ look more relevant (Notehead schemes | MuseScore).

Those give a different look for each note/scale degree, and include a moveable do solfege option.

The obvious (to us!) omission is actual IFR like scale degree numbers. :frowning:

However, since at least some of these styles originated as a ‘plug-in’ facility (this one, maybe) in an earlier version, maybe there’s a way for someone with the appropriate skills (or perhaps one could suggest it to the developer community)?

I’m not certain, but I suspect that various other parts of what is normally visible in Musescore could be made invisible one way or another to get a more IFR like view?

If you could get a Musescore based solution close to what you want, input might be as simple as using a MIDI keyboard! https://musescore.org/en/handbook/3/note-input?

N.B. I am not an expert; this is just speculation based on some stuff I found using the excellence of modern search engine technology. :wink:

Hello IFR friends,
I’m back again with a revised version of my adaptation of Jef Moine’s ABC Notation Extension Module for use in the IFR world. (See my original posting of Jun ’22.)

My Javascript-module is named STCN for Standard - Tonal - Chordal - Notation.

The first area shows the melody in standard music notation, with the lead-sheet chords (possibly with extensions and alterations) right above it.
The original rhythmic notation is preserved (duration, ties, slurs, triplets,…). The current key is also shown.
(Caveat: I have not thoroughly tested modulating to another key yet …)

The next area shows the real analysis stuff.
The first line shows the lead-sheet chords IFR-wise, as degrees in the current key, respecting the chord quality and using the slash to denote a non-root bass.
The next line shows the melody notes in the TONAL map of the given Major key, and marks them according to their appropriate octaves:
unmarked for the basic register, a '-superscript for the next higher octave,
a ,-subscript for the lower octave, and so on.
Tied notes are shown in a slightly smaller font with a sort of underscore.
Rests are displayed as a small square irrespective of duration, but in sync with the rest marks in the standard notation.
The following line shows the melody notes in relation to the root of the current chord. I call this the Chordal system - for a few beats I am now living in the world defined by this chord.
The same remarks apply to ties and rests as in the T-Line.
As a beginner I have found it most supportive for playing chord melody to know at a glance the quality of the world/chord I am currently living/playing in. Thus this line is restricted to basic triads or tetrads.

My ABC-notation for Guitar Chords supports some additions, e.g.
“CM7: #x1 /G” leads to the C Maj7 chord with a #11 extension and with G in the bass.
However, the spelling in the Q-Line will show only the basic Triad or Tetrad.

The ABC-script to render the song in a local browser is:

Is this approach still of interest at all? Would anyone want me to make the module available to others? (I’m still looking to package it in a self-contained app under Windows 10 - that could take some time …)
Anyway, suggestions or hints for improvement are most welcome.
Friendly yours,
Stenio L.

Wow! That is fascinating, impressive & comprehensive. Well done.

The only advance I could think of would be some form of ‘melody contour’ for the T-Line melody note numbers, i.e. vertical position within the line varies (at least a little) with number, as IFR do in their charts, but I appreciate that could be very complicated. What you already have is a great advance.

I’m moderately technical but at present I’d have no idea how to integrate it into an ABC renderer, so access to the module would only be of any use if it also included notes on that integration process (or a pointer to where that information can be found).

I can’t say for certain that I’d use it (I’m moderately happy with my informal hand-written charts, and pretty busy with other stuff), but if I found the time it would be a very interesting possibility to investigate.

Thank you very much for sharing this glimpse at the work in progress.

I’ll tag David Reed @ImproviseForReal so this gets bumped for him.

Would it be possible for you to write up 1) step by step installation/setup instructions and 2) a step by step user guide for getting a song rendered? I’d certainly be happy to see that.

Doesn’t have to be fancy… just accurate and easy for the less technical of us.

@slingaya Hi Stenio, thank you for sharing this with our group. And thank you @DavidW for tagging me. It’s very interesting to see how you’re approaching this problem. Whatever the technical details or logistical issues are, the important thing is the creativity and thought that you’re putting into framing the problem in the first place and imagining solutions.

But because you mentioned that your intention is to create something for use “in the IFR world”, let me paint a picture of some of the issues on our side.

  1. COPYRIGHT. For us to be able to recommend (or possibly even distribute) a tool like this, all song content would have to be input by the user somehow. We can’t distribute a tool that already contains information about song melodies, because these melodies are now “owned” by a handful of parasitic “publishing companies” who for some reason have not yet gone extinct. If the logic of the free market prevails, eventually these companies will die a painful death because they add no value. But ownership is a pretty sacred principle in the modern world, and for the moment these companies have been able to buy up and monopolize vast swaths of our musical culture. So this is an issue that is paralyzing an enormous amount of teaching, art, and new innovation. But it’s our current legal reality, and it’s directly relevant to the applications for this kind of tool.

  2. SUBJECTIVITY. Music is an abstract art form. We can study its raw materials in the controlled environment of a teaching method like IFR. But the goal of this teaching is not to develop a methodology for classifying every sound in the world and giving it a persistent, objective label. So music is not like chemistry, where we have a finite number of elements that we can classify in a periodic table. It’s more like mathematics or philosophy, where any given concept can be analyzed from multiple points of view. So the goal of the IFR method is not to give our students a formula for converting written notes into tonal numbers. The goal is to give our students the personal resources to relate their OWN feelings to the tonal numbers. But the same note that feels to you like a 6 could feel to me like a 2. Especially if the other notes of the song do not fit neatly into any single major scale, then there is no mathematical way to establish which point of view is “better”. This doesn’t diminish the utility of the tool you’re developing. It can still be an incredibly valuable teaching tool. But its greatest value would be to permit the user to quickly switch between multiple points of view, allowing the USER to decide which point of view to adopt. For example, let’s say that the user has already input all of the notes to “Summertime” (to build on your example), using absolute note names like C#. The next step should be that the user can select a key center from which to analyze the song. For example, given the notes and chords to Summertime, the key of C is an obvious choice. So what happens when I tell the app to analyze the melody and chords relative to C as the key? Okay, that’s interesting. But since the main tonal center is Am, next I would like to see a complete analysis in tonal numbers where note 1 = A. That will produce the familiar concepts of a “minor scale”. For example, that first Am chord will now be shown as 1-. 
While this is not the primary point of view that we would teach first in IFR, it is a valid and interesting point of view. And of course there can be others. So ideally, this tool should help the user perform his or her own harmonic analysis. It shouldn’t attempt to analyze a song for the user. Instead it should assist the user in quickly visualizing the consequences of any particular point of view.
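The “switch points of view” idea is cheap to sketch. Assuming the notes have been entered with absolute names, re-labelling them against a chosen key center is a simple lookup. This toy code is mine, not an existing tool, and it hard-codes flat spellings; as discussed above, b3 versus #2 should really be left to the user:

```python
# Sketch of the "choose your own key center" idea: the same absolute
# note gets a different tonal number depending on which note we call 1.
# The flat spellings chosen here are one convention among several.
PITCH = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
         "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
         "A#": 10, "Bb": 10, "B": 11}
DEGREE = ["1", "b2", "2", "b3", "3", "4", "b5", "5", "b6", "6", "b7", "7"]

def tonal_number(note, key_center):
    """Tonal number of `note` when `key_center` is heard as note 1."""
    return DEGREE[(PITCH[note] - PITCH[key_center]) % 12]
```

So the same written C is a 1 from the C point of view, but a b3 (the familiar minor third) once we decide that note 1 = A.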

  3. DESIGN. The final issue is the simplicity and beauty of the final result. I realize that this is a WIP and that the presentation layer can be perfected later. But for this to be useful to IFR students, you would have to use the actual IFR symbols, and we would have to find a way to present the content that is much easier to understand and process at first glance. But I realize that all of this can be designed later, and that perhaps the issues above are more relevant to deciding which direction you want to go.

So those are some of the issues that I see with the development and adoption of a tool like this. But if all three of these issues can be solved, then I think it could be incredibly useful (and great fun) to have a tool like this to facilitate instant harmonic analysis using parameters input by the user.

I hope these comments add something to your assessment of the landscape. Whichever directions you decide to explore, I hope you’ll continue to let us know about your progress. And even if you decide that you need to develop your tool in a way that doesn’t quite fit what we do in IFR, remember that the world is much, much bigger than IFR, and there are many different ways that you can make an important contribution to the world’s understanding and enjoyment of music. So above all, I just encourage you to keep going, and to think deeply about which direction YOU want to go with this idea. And if we can help in any way, just let me know.

I like to use ABC notation, but like to keep things looking simple and clean. And while technology can be useful, it sometimes pays dividends to sit quietly with a piece of paper and a pencil and draw rising and falling sequences.

What drew me to IFR was the simple way it lays out melody and harmony using the numbers, and I have been combining ABC notation and IFR numbers when notating some of the tunes I’ve been learning.