Feature Requests

Multi-View External Touchscreen Mode (Show 4-6 Independent Screens on One Large Display)
Description: Add an external-display mode that can show multiple independent app views ("screens") simultaneously on one large third-party touchscreen (24" or larger). The goal is to run a single device as the brain while presenting 4-6 separate, touch-interactive panels on one big screen (e.g., Clips, Mixer, Widgets, Set List, Browser, Actions).
Problem: Large performance rigs often need multiple views at the same time:
- On iPad, users constantly switch pages/panels (Clips <-> Mixer <-> Widgets <-> Browser), which costs time and increases live risk.
- Using multiple iPads/iPhones as "visual units" works, but it adds hardware, power, cooling, cabling, and reliability complexity.
- A large external touchscreen has enough real estate to replace several small screens, but there is no way to display multiple independent views at once (instead of a single mirrored or single-window view).
Problems:
- Excessive view switching during performance increases mistakes and slows operation.
- Multi-device setups are expensive and fragile (battery, heat, mounts, networking, sync).
- A single mirrored view on a large display does not solve the "many panels at once" workflow.
Proposed Solution:
1) External Display "Multi-View" mode. When an external display is connected, offer a mode where the external screen can be split into 2/4/6 panels (user-selectable layouts). Each panel can show an independently chosen view, for example:
- Clips page (or a specific page)
- Mixer
- Widget canvas (a specific widget page)
- Project browser / set lists
- Actions / sequences monitor
- Plugin windows (optional)
2) Touch routing and interaction. Touch input on the external touchscreen should interact directly with the panel being touched. Optional "lock panel" to prevent accidental edits (performance safety).
3) Panel assignment and persistence. Each panel has a selector to choose its content. Save the panel layout per project, or as a global preset ("Stage layout", "Studio layout"). (A code sketch of a possible panel-layout model follows this summary.)
4) Performance-focused options. "Always-on" critical panels (e.g., master meters + CPU/DSP + battery if available). Optional large-font / high-contrast mode for distance viewing. Optional "safe mode" where only whitelisted controls are touchable.
5) Compatibility strategy. If iOS limits true multi-window external UI, provide the best feasible approximation:
- A dedicated external UI that renders multiple panels while the main device UI remains normal.
- Graceful fallback to current single-view behavior on unsupported hardware.
Benefits:
- Replaces multiple small devices with one large, touch-capable surface.
- Dramatically reduces live navigation friction (no constant panel switching).
- Improves safety and speed: key controls remain visible and reachable at all times.
- Cleaner stage ergonomics and a simplified rig (less power management, fewer mounts/cables).
Examples:
6-panel stage layout on a 24" touchscreen:
- Top row: Clips (main page), Mixer, Set List
- Bottom row: Widgets (performance macros), Actions/Sequences status, Master meters
Rehearsal layout:
- Large clip grid + mixer + browser, with "lock" enabled for non-critical panels.
Busking/compact rig:
- 4 panels: Clips, Widgets, Master meters, Quick browser, replacing a multi-phone display setup.
This summary was automatically generated by GPT-5.2 Thinking on 2025-12-29.
Original Post: Allow 6 Loopy Pro screens to appear on one 24" or larger 3rd-party touchscreen. For immediate, very spontaneous live-performance access (quick grab-and-go on a larger format), designed for quick glances by a frontman guitarist who is constantly engaging the audience through a non-stop 45-minute set with no pauses between songs, e.g. one screen for all drum and drum-loop info, one screen for prerecorded .mp3 stereo tracks, etc.
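As a rough illustration of the panel model and the external-display plumbing, here is a minimal Swift/UIKit sketch. The scene-delegate pattern is standard iOS API, but `PanelContent`, `ExternalLayout`, and `PanelGridViewController` are hypothetical names invented for this example; whether touch input can actually be received from a given external screen depends on iOS and the display hardware, per the compatibility note above.

```swift
import UIKit

// Hypothetical content types for the proposed multi-view panels.
enum PanelContent: String, Codable {
    case clips, mixer, widgets, setList, browser, actions, masterMeters
}

// A user-selectable external layout: 2/4/6 panels plus per-panel locks.
struct ExternalLayout: Codable {
    var panels: [PanelContent]   // order defines grid position (row-major)
    var lockedPanels: Set<Int>   // indices protected from accidental touches
}

// Standard UIKit scene handling: iOS hands the app a second window scene
// when an external display is attached, so a separate multi-panel UI can
// render there while the device's own UI stays unchanged.
final class ExternalSceneDelegate: NSObject, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        let window = UIWindow(windowScene: windowScene)
        // Hypothetical controller that tiles the layout into a 2x3 grid
        // and routes touches to whichever unlocked panel was hit.
        window.rootViewController = PanelGridViewController(
            layout: ExternalLayout(
                panels: [.clips, .mixer, .setList, .widgets, .actions, .masterMeters],
                lockedPanels: [5]))  // e.g. lock the meters panel
        window.isHidden = false
        self.window = window
    }
}

// Stub for illustration only; the real grid/touch logic would live here.
final class PanelGridViewController: UIViewController {
    let layout: ExternalLayout
    init(layout: ExternalLayout) {
        self.layout = layout
        super.init(nibName: nil, bundle: nil)
    }
    required init?(coder: NSCoder) { fatalError("not used") }
}
```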
2 · chatgpt-proof-read-done · under review
Stable ID-Based MIDI Binding for Clips and Widgets
Description: Implement a robust MIDI binding system that links MIDI controls to persistent internal IDs for clips and widgets, instead of to their current index, position, or label. The goal is to keep MIDI mappings stable across layout edits, reordering, and design changes, so controllers remain reliable as projects evolve.
Problem: Currently, MIDI bindings appear to be tied to positional or name-based references (for example, "Clip 15" or "Slider 7"). When the user adds, deletes, or reorders clips or widgets, these indices shift, causing existing bindings to:
- Point to the wrong clip or widget, or
- Break entirely and require manual rebuilding.
This leads to:
- MIDI mappings becoming unreliable after even minor layout tweaks.
- Repeatedly having to re-check and rebuild bindings when refining a page or moving UI elements.
- A strong disincentive to iterate on layouts once MIDI is set up.
- High risk of failure in live performance, where a moved clip or widget can silently invalidate mappings.
Users report that even moving a clip slightly on the canvas can change its internal numbering and break mappings, forcing a tedious verification pass over "each and every binding" after small visual adjustments. It also limits practical reuse of MIDI setups across pages and projects.
Proposed Solution: Introduce ID-based object binding for all MIDI mappings (a minimal data-model sketch follows this summary):
- Assign each clip and widget a persistent internal ID (unique object identifier).
- When a MIDI binding is created, store the binding against that internal ID, not its index, row, column, or label.
Ensure that:
- Reordering clips or widgets on a page does not affect their MIDI bindings.
- Moving a clip or widget between rows or positions on the canvas leaves its bindings intact.
- Duplicating a clip or widget (including across pages or projects) offers the option to either:
  - Copy the bindings along with the object, or
  - Create a clean copy without bindings (user-selectable behavior).
- Only explicit deletion of a clip or widget invalidates its associated bindings.
Implementation notes:
- Provide a safe migration path for existing projects (e.g. converting current index-based bindings to ID-based on load).
- In the control settings / profiles UI, display the bound target by name and location for user clarity, but internally use the stable ID.
- Optionally expose a "relink target" function for reassigning a binding to another object without recreating it from scratch.
Benefits:
- MIDI mappings become resilient to layout changes, renaming, and reordering.
- Users can freely refine pages, move clips a "few centimeters" for better ergonomics, or redesign a performance layout without destroying their control setup.
- Greatly improved reliability in live contexts, where any unexpected re-mapping is unacceptable.
- Enables copying individual clips or widgets (with their bindings) across pages and projects as reusable building blocks.
- Encourages experimentation and modular UI design, fully aligned with Loopy Pro's flexible canvas concept.
Examples:
Layout refinement:
- A row of clips is moved down to make room for new controls.
- With ID-based binding, the same footswitches still trigger the same clips as before, regardless of their new positions.
Reusing a performance widget:
- A "Record/Play/Stop" button widget with carefully tuned MIDI bindings is copied to a new page or project.
- The copy retains its mappings to the intended target clip or role, instead of reverting to default or pointing at the wrong object.
Multi-song setups:
- A user designs a template page with a grid of clips and a bank of widgets mapped to a MIDI controller.
- They duplicate the page for Song 2 and Song 3, adjust clip contents and layout, and all bindings continue to work without manual re-learning.
This summary was automatically generated by GPT-5.1 Thinking on 2025-12-27.
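A minimal Swift sketch of what ID-based binding could look like as a data model. All names (`CanvasObject`, `MidiBinding`, `BindingResolver`) are hypothetical illustrations, not Loopy Pro internals; the point is that resolution is a dictionary lookup by a stable UUID, so position and order never participate.

```swift
import Foundation
import CoreGraphics

// Hypothetical model: every clip/widget carries a persistent identifier
// assigned once at creation. Name and position are free to change.
struct CanvasObject {
    let id: UUID
    var name: String
    var position: CGPoint   // layout only; never used for binding lookup
}

// A MIDI binding stores the target's stable ID, not an index or label.
struct MidiBinding: Codable {
    let targetID: UUID
    let ccNumber: UInt8     // simplified; could be a note, CC, PC, etc.
    let action: String      // e.g. "toggleRecord" (simplified)
}

// Resolution is a dictionary lookup by ID, so reordering or moving
// objects on the canvas can never redirect or break a binding.
final class BindingResolver {
    private var objectsByID: [UUID: CanvasObject] = [:]

    func register(_ object: CanvasObject) {
        objectsByID[object.id] = object
    }

    func target(of binding: MidiBinding) -> CanvasObject? {
        objectsByID[binding.targetID]  // nil only after explicit deletion
    }
}
```

Migration of existing projects could then be a one-time pass on load: resolve each legacy index-based binding to the object it currently points at, and store that object's ID instead.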
4 · chatgpt-proof-read-done · under review
Channel Strip Presets – Save and Recall Full Channel Plugin Configurations
Description: Enable saving and recalling entire channel strips (including all plugins and optionally widgets) as presets. This allows fast loading of complex FX/instrument chains across different projects.
Problem: Rebuilding plugin chains and settings for each project or channel type (input, color, bus, master) is time-consuming, especially in live setups or when consistent tone is needed across songs.
Proposed Solution:
– Add a feature to save a full channel (strip) as a reusable preset
– Optionally include bound widgets and layout elements
– Allow importing/exporting channel strip presets (a serialization sketch follows this summary)
– Add "recall channel preset" and "save channel preset" actions
– Allow presets for:
  – input channels
  – color channels
  – bus channels
  – master channel
– Optional: grey out unloaded channels to free up RAM/CPU
– Optional: preload (but inactive) channel strips for fast switching
– Optional: include full widget configuration (position, behavior, etc.)
– Add tags or labels to organize channel strip types (e.g. "Guitar FX", "Vocal Bus")
Benefits:
– Saves time during project setup
– Allows modular, reusable channel configurations
– Supports consistent tone in live and studio workflows
– Ideal for musicians switching instruments/effects between songs
– Reduces CPU/memory load by offloading unused channels
– Encourages structured and repeatable setups
This summary was automatically generated by ChatGPT-4 on April 30, 2025.
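A minimal Swift sketch of how such a preset might be serialized, assuming a JSON container. The type names and the `componentID`/`stateData` fields are illustrative assumptions, not an existing Loopy Pro format.

```swift
import Foundation

// Hypothetical preset model: a serializable snapshot of one channel strip.
struct ChannelStripPreset: Codable {
    enum ChannelKind: String, Codable { case input, color, bus, master }

    struct PluginSlot: Codable {
        var componentID: String  // AUv3 component identifier (assumed format)
        var stateData: Data      // opaque full-state blob from the plugin
        var bypassed: Bool
    }

    var name: String             // e.g. "Guitar FX"
    var kind: ChannelKind
    var tags: [String]           // e.g. ["Guitar FX", "Vocal Bus"]
    var plugins: [PluginSlot]    // ordered chain, top to bottom
    var includeWidgets: Bool     // whether bound widgets travel with it
}

// Import/export as files so presets can move between projects and devices.
func exportPreset(_ preset: ChannelStripPreset, to url: URL) throws {
    try JSONEncoder().encode(preset).write(to: url, options: .atomic)
}

func importPreset(from url: URL) throws -> ChannelStripPreset {
    try JSONDecoder().decode(ChannelStripPreset.self, from: Data(contentsOf: url))
}
```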
12 · chatgpt-proof-read-done · planned
Multi-Target Morph Pad With Zones, Rails, and High-Resolution Output
Description: Add a new control-surface widget: a "Morph Pad" that can continuously morph (interpolate) between multiple user-defined target states. Each target stores values for many destinations at once (AUv3 parameters, mixer controls, sends, actions, and/or MIDI). The performer moves a cursor (finger) on a 2D pad, and Loopy Pro outputs a smooth blend across all assigned destinations. The key goal: one gesture should reliably and repeatably control many parameters ("XYZABC..."), not just X and Y.
Problems:
- Complex transitions currently require many separate controls (sliders/knobs) or multiple XY pads, which is slow to build and fragile live.
- Live morphing across many parameters is hard to hit precisely and hard to repeat.
- Freeform touch control without target/snap logic can cause jitter near boundaries and makes it difficult to land on musically meaningful states.
- Users who want "morphing" often depend on external apps/controllers, adding routing complexity and failure points.
Proposed Solution:
1) Morph core: Targets (the foundation). Allow adding N "Targets" (e.g., 2–16+). Each Target stores a full snapshot of assigned destinations (any number of parameters/controls). During performance, compute weights per Target (distance-based interpolation) and output interpolated values to all destinations in real time (see the interpolation sketch after this summary).
2) Live-safe precision. Optional "Magnet/Snap" to Targets (strength + radius). Optional hysteresis/stability to prevent flicker when hovering near boundaries or between Targets. Optional release behavior: hold last value, return to a default Target, or spring to center.
3) Zones (aligned, performance-oriented). Provide an aligned zone editor (rectangles/polygons with snap-to-grid, numeric sizing/position). Zones serve as: a) visual overlays (labels) to communicate intent, and/or b) mapping layers: Zone A morphs parameter set A, Zone B morphs parameter set B. Rationale: aligned zones keep values targetable and repeatable under finger performance, while still enabling complex layouts.
4) Rails/Paths (line tool for repeatable morph gestures). Let users define one or more "Rails" (paths). Optional cursor lock-to-rail: the pad behaves like a constrained morph fader along an arbitrary curve. Rails enable stage-proof morphs (Clean -> Big -> Destroyed) with minimal risk of unintended states.
5) Scaling, curves, and limits per destination. Per destination: min/max range, curve (linear/exponential/S-curve), invert, smoothing. Optional quantized steps for musically discrete parameters (e.g., rhythmic divisions).
6) High-resolution control output (optional). Internal high-resolution smoothing for AUv3 parameters. Optional high-resolution external MIDI modes (e.g., 14-bit CC via MSB/LSB pairs and/or NRPN) where appropriate.
7) Fast workflow ("build it in minutes"). "Add Destination" / learn workflow to capture AUv3 params or internal controls quickly. "Capture Target" button: store current values into the selected Target. Copy/paste Targets and mappings, and a clear overview list of all destinations.
Benefits:
- Dramatically reduces UI clutter while increasing expressive control.
- Enables repeatable, precise morphing between meaningful sound states.
- Improves reliability for live performance through targets, snap, hysteresis, and rails.
- Unifies internal control and external MIDI ecosystems without extra routing apps.
Examples:
- FX Morph: one pad morphs Reverb mix, Delay feedback, Filter cutoff, and Drive from "Clean" to "Cinematic" to "Aggressive".
- Loop Scene Morph: crossfade track groups, adjust send levels, and tweak global FX with one gesture.
- Safe Rail: a single "Clean -> Big" rail that is easy to hit and repeat under stress.
- Zone Layers: top half morphs "Ambient FX", bottom half morphs "Rhythmic FX", with identical hand motion producing different musical outcomes depending on region.
This summary was automatically generated by GPT-5.2 Thinking on 2025-12-27.
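A minimal Swift sketch of the morph core, using inverse-distance weighting as one plausible interpolation scheme (the request does not prescribe a formula). All names are illustrative; snap, hysteresis, and per-destination curves would layer on top of this.

```swift
import simd

// A morph Target: a position on the 2D pad plus a full snapshot of
// destination values (destination ID -> stored value, normalized 0...1).
struct MorphTarget {
    var position: SIMD2<Double>   // pad coordinates in 0...1
    var values: [String: Double]
}

// Inverse-distance weighting: nearby Targets dominate the blend, and
// placing the cursor exactly on a Target reproduces its snapshot exactly.
func morph(targets: [MorphTarget], cursor: SIMD2<Double>) -> [String: Double] {
    var weights = [Double](repeating: 0, count: targets.count)
    for (i, target) in targets.enumerated() {
        let d = simd_distance(cursor, target.position)
        if d < 1e-9 { return target.values }  // on a Target: exact recall
        weights[i] = 1.0 / (d * d)
    }
    let total = weights.reduce(0, +)

    var blended: [String: Double] = [:]
    for (i, target) in targets.enumerated() {
        let w = weights[i] / total
        for (id, value) in target.values {
            blended[id, default: 0] += w * value
        }
    }
    return blended
}
```

Magnet/snap could bias the weights toward the nearest Target inside its radius, and hysteresis could suppress weight changes below a threshold while hovering; both fit cleanly on top of this core without changing the output contract.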
1 · chatgpt-proof-read-done · under review
Retrospective Loop Waveform Visualization (Pre-Record Buffer + Instant Waveform)
Description: Add a retrospective (pre-record) loop buffer and immediately display a waveform visualization for the captured audio once the loop is created. The goal is to support "I played it already" moments: users can capture the last few seconds/bars of a performance and instantly see the waveform for editing and confidence.
Problem: In live looping, great moments happen unexpectedly:
- Users start playing, then realize they want that phrase as a loop after it has happened. Without retrospective capture, the moment is lost or must be recreated.
- Even when audio is captured, the lack of immediate waveform feedback makes it harder to confirm what was recorded, where the transient start is, and whether trimming is needed.
A retrospective buffer plus waveform display would reduce friction and improve confidence when creating loops from spontaneous performance.
Proposed Solution:
1) Retrospective record buffer. Maintain a continuous circular buffer per selected input/bus (opt-in to control CPU/memory); a sketch of such a buffer follows this summary. Allow "Capture last …" actions:
- Last X seconds
- Last X beats/bars (tempo-synced), if applicable
Optionally provide multiple capture lengths (e.g., 2 bars / 4 bars / 8 bars) as separate actions.
2) Automatic loop creation from buffer. When triggered, Loopy creates a new loop/clip from the buffer content. Provide alignment options:
- Capture aligned to bar boundaries (quantized)
- Capture "as played" (free)
Optional transient detection for better start points (nice-to-have).
3) Waveform visualization immediately after capture. Once the clip exists, show a waveform view right away:
- In the clip view/editor
- Or as a temporary overlay preview
Include basic markers:
- start/end points
- loop boundary
- playhead
4) Editing integration. Quick trim/adjust loop points directly from the waveform view. Optional "fade in/out" for click prevention (if not already present).
5) Performance and quality considerations. Configurable buffer length and sources to limit memory/CPU usage. Use low-latency waveform generation (generate a coarse waveform first, refine in the background if needed).
Benefits:
- Captures spontaneous musical moments that would otherwise be lost.
- Faster loop creation: no need to re-play parts just to record them.
- Immediate visual confirmation improves confidence and reduces re-takes.
- Waveform-based trimming makes loops cleaner and more musical (better starts/stops).
Examples:
Spontaneous riff capture:
- User plays a great 4-bar phrase, then taps "Capture last 4 bars" to instantly create a loop and see the waveform for quick trimming.
Free-time texture:
- User improvises an ambient swell, then captures the last 12 seconds and trims visually to a clean loop boundary.
Live reliability:
- The waveform view immediately shows a clipped transient or off-start point, allowing a quick fix before the loop enters the mix.
This summary was automatically generated by GPT-5.2 Thinking on 2025-12-29.
Original Post: When using retrospective loops it's hard to know if I covered a whole loop with audio. Visualizing the waveform inside the loop before it's captured would help me know what the final result will be once I close the loop and play it.
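A minimal Swift sketch of a retrospective circular buffer plus coarse waveform decimation. This is deliberately simplified and not realtime-safe (a production render thread would avoid allocation and locking); all names are hypothetical.

```swift
import Foundation

// A fixed-size circular buffer that continuously records the most recent
// audio, so "capture last X seconds" is a read, not a new recording.
final class RetrospectiveBuffer {
    private var samples: [Float]
    private var writeIndex = 0
    private var filled = 0
    let sampleRate: Double

    init(seconds: Double, sampleRate: Double = 48_000) {
        self.sampleRate = sampleRate
        self.samples = [Float](repeating: 0, count: Int(seconds * sampleRate))
    }

    // Called with each incoming audio block; old samples are overwritten.
    func write(_ block: [Float]) {
        for sample in block {
            samples[writeIndex] = sample
            writeIndex = (writeIndex + 1) % samples.count
            filled = min(filled + 1, samples.count)
        }
    }

    // Returns the most recent `seconds` of audio in chronological order.
    func captureLast(seconds: Double) -> [Float] {
        let count = min(Int(seconds * sampleRate), filled)
        var index = (writeIndex - count + 2 * samples.count) % samples.count
        var out = [Float]()
        out.reserveCapacity(count)
        for _ in 0..<count {
            out.append(samples[index])
            index = (index + 1) % samples.count
        }
        return out
    }
}

// Coarse peak decimation for a fast first-pass waveform; a refined
// version can be generated in the background afterwards.
func coarseWaveform(_ audio: [Float], buckets: Int) -> [Float] {
    guard !audio.isEmpty, buckets > 0 else { return [] }
    let per = max(1, audio.count / buckets)
    return stride(from: 0, to: audio.count, by: per).map { start in
        audio[start..<min(start + per, audio.count)].reduce(0) { max($0, abs($1)) }
    }
}
```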
2 · chatgpt-proof-read-done · under review
Support Multi-Output AUv3 Effects (Route Each Output Bus Independently)
Description: Enable routing of AUv3 effects that expose multiple output buses (e.g. 4 outputs) so each output can be sent to different inputs/tracks for separate processing and mixing.
Problem: Some AUv3 effects provide multiple discrete outputs (for example, separate taps, layers, or processing paths). Currently, these multi-output effects behave like single-output units, so their additional outputs cannot be routed independently. This prevents common workflows like splitting an effect's individual outputs to different tracks for separate EQ/comp/reverb, parallel chains, or distinct mixes.
Proposed Solution:
1) Expose AUv3 output buses as routable nodes. When an AUv3 effect reports multiple output buses, display each bus as a selectable output source (e.g. "Out 1", "Out 2", "Out 3", "Out 4"). Allow routing each bus to different inputs/tracks/mixers. (A bus-enumeration sketch follows this summary.)
2) Add an output-bus selector per route (minimal UI option). In the routing UI, add a field like "Output Bus" for AUv3 effects. Default to the main/mixed output for compatibility, with manual selection for additional buses.
3) Clear channel/bus mapping and monitoring. Show how each bus maps to channels (mono/stereo) to avoid confusion. Provide a safe fallback (downmix or "main out only") if a project is opened on a system where the AUv3 reports a different bus configuration.
4) Optional: per-bus level meters. Lightweight metering for each exposed bus to confirm signal is present and routed correctly.
Benefits:
- Unlocks advanced multi-chain workflows on iOS/iPadOS with AUv3 effects that are designed for multi-output operation.
- Enables separate processing per output (EQ/comp/reverb), parallel routing, and cleaner mixes.
- Makes complex performance setups more reliable and predictable (no hidden downmixing).
Examples:
Use Bram Bos Scatterbrain with its 4 outputs:
- Route "Out 1" -> Track A (dry-ish) -> compressor
- Route "Out 2" -> Track B -> shimmer reverb
- Route "Out 3" -> Track C -> band-pass + delay
- Route "Out 4" -> Track D -> distortion + limiter
Sound design: Send different effect outputs to separate loopers/tracks to record and layer each output independently.
Live mixing: Keep one output as the main mix, while routing another output to a dedicated track for a sidechain or audience FX send.
This summary was automatically generated by GPT-5.1 Thinking on 2025-12-09.
Original Post: I'm trying to use Bram Bos Scatterbrain with its 4 outputs in Loopy Pro, but it seems that's not yet possible? My main feature request would be the ability to route multiple outs from an FX to multiple inputs for separate processing.
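A minimal Swift sketch of the discovery side: `AUAudioUnit.outputBusses` is the standard AudioToolbox/AVFoundation API a host can use to see how many output buses an instantiated effect actually reports, while the `EffectRoute` record is a hypothetical host-side model for per-bus routing.

```swift
import AVFoundation

// Enumerate the output buses an instantiated AUv3 effect reports.
// Each bus would become a selectable "Out N" source in the routing UI.
func describeOutputBusses(of unit: AUAudioUnit) {
    let busses = unit.outputBusses              // AUAudioUnitBusArray
    print("Effect exposes \(busses.count) output bus(es)")
    for i in 0..<busses.count {
        let format = busses[i].format           // AVAudioFormat per bus
        print("Out \(i + 1): \(format.channelCount) ch @ \(format.sampleRate) Hz")
    }
}

// Hypothetical host-side routing record: which bus feeds which track.
// Defaulting to bus 0 (the main/mixed output) preserves compatibility.
struct EffectRoute {
    var effectInstanceID: UUID
    var outputBusIndex: Int = 0                 // "Out 1" = bus 0
    var destinationTrackID: UUID
}
```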
1 · chatgpt-proof-read-done · under review