This page aims to outline the integration of musician tools within GNOME: tools to learn, write/compose/generate, and train with music.
Todo
- Analyze existing solutions
- Fill in the feature matrix for each solution
- Write specifications
  - list what each device should enable in the solution: data input / software control
- Make a feature matrix
Specifications
This project aims to provide a solution for musicians integrated within the GNOME 3 UX. This section details what this goal implies.
The solution should of course respect all the GNOME 3 design paradigms; nevertheless, it should especially emphasize this principle: design a self-teaching interface for beginners and an efficient interface for advanced users, but optimize for intermediate users. While it is important to integrate with existing solutions on top of JACK and MIDI synthesizers, basic users should not have to care about them. The solution should be usable out of the box.
While the solution must cover three main topics (learn, write, train), whether it should be an all-in-one application will need more investigation. The goal is to provide one homogeneous set of tools able to work together, which means being able to synchronize on an edited project: using a common sequencer, but also supporting "distributed" live editing. Network-shareable project editing would also be an important feature.
The solution should enable users to read and manipulate projects through different representations: classical scores and tablatures, but also linkable composition boxes (i.e. like node-based compositing in Blender), square matrices with different granularities (measure, beat, etc.), and so on.
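As an illustration of the matrix representation, here is a minimal sketch (a hypothetical data model, not an existing API) that buckets notes into a measure-by-beat grid, assuming 4 beats per measure:

```python
# Hypothetical sketch: bucket notes into a measure x beat grid.
# Each note is (start_beat, pitch); 4 beats per measure assumed.

BEATS_PER_MEASURE = 4

def to_grid(notes, measures):
    """Return a list of measures, each a list of per-beat pitch lists."""
    grid = [[[] for _ in range(BEATS_PER_MEASURE)] for _ in range(measures)]
    for start_beat, pitch in notes:
        measure, beat = divmod(start_beat, BEATS_PER_MEASURE)
        if 0 <= measure < measures:
            grid[measure][beat].append(pitch)
    return grid

notes = [(0, 60), (2, 64), (5, 67)]  # C4 on beat 0, E4 on beat 2, G4 on beat 5
grid = to_grid(notes, measures=2)
```

The same note list could be re-bucketed at a different granularity (e.g. per measure) simply by changing the divisor, which is the point of keeping the representation separate from the underlying data.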
It should be easy to select a set of notes, whether with:
- pointing devices
- a more advanced filtering tool
- selection forms with a given set of adjustable criteria
- a small program using a well-documented, easy-to-use API for even more complex tasks
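Such a selection API could look like the following sketch (all names are hypothetical, shown only to illustrate composable criteria):

```python
# Hypothetical selection API sketch: select notes by composable criteria.
# A note is a dict with 'pitch' (MIDI number) and 'beat' (position).

def select(notes, *criteria):
    """Return the notes matching every given predicate."""
    return [n for n in notes if all(c(n) for c in criteria)]

def pitch_between(low, high):
    return lambda n: low <= n["pitch"] <= high

def in_measure(index, beats_per_measure=4):
    return lambda n: n["beat"] // beats_per_measure == index

notes = [
    {"pitch": 60, "beat": 0},
    {"pitch": 72, "beat": 1},
    {"pitch": 64, "beat": 5},
]
# Notes in measure 0 whose pitch lies between C4 and F4.
selection = select(notes, pitch_between(60, 65), in_measure(0))
```

Predicates compose freely, so a graphical filtering tool and a scripted selection could share the same underlying mechanism.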
Then the user should be able to:
- perform basic actions:
  - paste somewhere else
  - move one beat left/right:
    - only the selected notes
    - shifting only the notes in the same measure
    - shifting the whole track after each treated group
  - …
- generate more complex transformations from this set:
  - inverted progression (mirroring) in time/pitch
  - infer inductive progressions and generate the following steps
  - …
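For instance, the time/pitch mirroring transformations mentioned above could be sketched as follows (hypothetical note model: each note is a `(beat, pitch)` pair):

```python
# Hypothetical sketch of mirroring transformations on a selection.
# Each note is (beat, pitch).

def mirror_time(notes):
    """Reverse the selection in time around its own span."""
    start = min(b for b, _ in notes)
    end = max(b for b, _ in notes)
    return [(start + end - b, p) for b, p in notes]

def mirror_pitch(notes, axis):
    """Reflect every pitch around a given axis pitch."""
    return [(b, 2 * axis - p) for b, p in notes]

sel = [(0, 60), (1, 64), (2, 67)]  # ascending C major triad
```

Here `mirror_time(sel)` plays the triad backwards, while `mirror_pitch(sel, 60)` turns the ascending intervals into the same intervals descending from C4.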
Here is a short list of what the solution should be able to take as input:
- mouse and keyboard
- MIDI devices, like synthesizers
- JACK devices, like guitars
- anything that a JACK server may provide as input
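At the MIDI level, input events arrive as short byte messages defined by the MIDI 1.0 specification; a minimal note-on decoder, independent of any particular MIDI library, could look like this sketch:

```python
# Sketch: decode a 3-byte MIDI message (status, data1, data2).
# A note-on has status 0x9n (n = channel) and a non-zero velocity.

def parse_note_on(msg):
    """Return (channel, note, velocity) for a note-on message, else None."""
    status, note, velocity = msg
    if status & 0xF0 == 0x90 and velocity > 0:
        return (status & 0x0F, note, velocity)
    return None

event = parse_note_on(bytes([0x90, 60, 100]))  # C4, channel 0
```

A note-on with velocity 0 is treated as a note-off by convention, which is why the decoder rejects it here.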
Version control and diff visualisation integration
Non-goals
The solution aimed at here does not include a sound editor, although users should be able to integrate samples into their projects and perform basic manipulations on them.
Existing solutions
See Wikipedia for a more extensive list of existing solutions.
Feature matrix
learn

| Feature name | Description | Availability |
| Feature name | Description | available |
| Feature name | Description | not impossible |
| Feature name | Description | not available |

train

write/compose/generate
External links
Resources on computer music: