Filtered by vendor: Linux
Filtered by product: Linux Kernel
Total: 16451 CVEs
| CVE | Vendors | Products | Updated | CVSS v3.1 |
|---|---|---|---|---|
| CVE-2024-35974 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 5.5 Medium |
| In the Linux kernel, the following vulnerability has been resolved: block: fix q->blkg_list corruption during disk rebind Multiple gendisk instances can be allocated/added for a single request queue in the case of disk rebind. A blkg may still sit in q->blkg_list when blkcg_init_disk() is called for the rebind, and q->blkg_list then becomes corrupted. Fix the list corruption issue by: - adding blkg_init_queue() to initialize only q->blkg_list & q->blkcg_mutex - moving the blkg_init_queue() call into blk_alloc_queue() The list corruption has been possible since commit f1c006f1c685 ("blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()"), which delays removing the blkg from q->blkg_list until blkg_free_workfn(). | ||||
| CVE-2024-27005 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 6.3 Medium |
| In the Linux kernel, the following vulnerability has been resolved: interconnect: Don't access req_list while it's being manipulated The icc_lock mutex was split into separate icc_lock and icc_bw_lock mutexes in [1] to avoid lockdep splats. However, this didn't adequately protect access to icc_node::req_list. The icc_set_bw() function will eventually iterate over req_list while only holding icc_bw_lock, but req_list can be modified while only holding icc_lock. This causes races between icc_set_bw(), of_icc_get(), and icc_put(). Example A: CPU0 CPU1 ---- ---- icc_set_bw(path_a) mutex_lock(&icc_bw_lock); icc_put(path_b) mutex_lock(&icc_lock); aggregate_requests() hlist_for_each_entry(r, ... hlist_del(... <r = invalid pointer> Example B: CPU0 CPU1 ---- ---- icc_set_bw(path_a) mutex_lock(&icc_bw_lock); path_b = of_icc_get() of_icc_get_by_index() mutex_lock(&icc_lock); path_find() path_init() aggregate_requests() hlist_for_each_entry(r, ... hlist_add_head(... <r = invalid pointer> Fix this by ensuring icc_bw_lock is always held before manipulating icc_node::req_list. The additional places icc_bw_lock is held don't perform any memory allocations, so we should still be safe from the original lockdep splats that motivated the separate locks. [1] commit af42269c3523 ("interconnect: Fix locking for runpm vs reclaim") | ||||
| CVE-2024-26710 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 5.5 Medium |
| In the Linux kernel, the following vulnerability has been resolved: powerpc/kasan: Limit KASAN thread size increase to 32KB KASAN is seen to increase stack usage, to the point that it was reported to lead to stack overflow on some 32-bit machines (see link). To avoid overflows the stack size was doubled for KASAN builds in commit 3e8635fb2e07 ("powerpc/kasan: Force thread size increase with KASAN"). However with a 32KB stack size to begin with, the doubling leads to a 64KB stack, which causes build errors: arch/powerpc/kernel/switch.S:249: Error: operand out of range (0x000000000000fe50 is not between 0xffffffffffff8000 and 0x0000000000007fff) Although the asm could be reworked, in practice a 32KB stack seems sufficient even for KASAN builds - the additional usage seems to be in the 2-3KB range for a 64-bit KASAN build. So only increase the stack for KASAN if the stack size is < 32KB. | ||||
| CVE-2024-26629 | 2 Linux, Redhat | 2 Linux Kernel, Enterprise Linux | 2025-12-23 | 5.5 Medium |
| In the Linux kernel, the following vulnerability has been resolved: nfsd: fix RELEASE_LOCKOWNER The test on so_count in nfsd4_release_lockowner() is nonsense and harmful. Revert to using check_for_locks(), changing that to not sleep. First: harmful. As is documented in the kdoc comment for nfsd4_release_lockowner(), the test on so_count can transiently return a false positive resulting in a return of NFS4ERR_LOCKS_HELD when in fact no locks are held. This is clearly a protocol violation and with the Linux NFS client it can cause incorrect behaviour. If RELEASE_LOCKOWNER is sent while some other thread is still processing a LOCK request which failed because, at the time that request was received, the given owner held a conflicting lock, then the nfsd thread processing that LOCK request can hold a reference (conflock) to the lock owner that causes nfsd4_release_lockowner() to return an incorrect error. The Linux NFS client ignores that NFS4ERR_LOCKS_HELD error because it never sends NFS4_RELEASE_LOCKOWNER without first releasing any locks, so it knows that the error is impossible. It assumes the lock owner was in fact released so it feels free to use the same lock owner identifier in some later locking request. When it does reuse a lock owner identifier for which a previous RELEASE failed, it will naturally use a lock_seqid of zero. However the server, which didn't release the lock owner, will expect a larger lock_seqid and so will respond with NFS4ERR_BAD_SEQID. So clearly it is harmful to allow a false positive, which testing so_count allows. The test is nonsense because ... well... it doesn't mean anything. so_count is the sum of three different counts. 1/ the set of states listed on so_stateids 2/ the set of active vfs locks owned by any of those states 3/ various transient counts such as for conflicting locks. When it is tested against '2' it is clear that one of these is the transient reference obtained by find_lockowner_str_locked(). It is not clear what the other one is expected to be. In practice, the count is often 2 because there is precisely one state on so_stateids. If there were more, this would fail. In my testing I see two circumstances when RELEASE_LOCKOWNER is called. In one case, CLOSE is called before RELEASE_LOCKOWNER. That results in all the lock states being removed, and so the lockowner being discarded (it is removed when there are no more references which usually happens when the lock state is discarded). When nfsd4_release_lockowner() finds that the lock owner doesn't exist, it returns success. The other case shows an so_count of '2' and precisely one state listed in so_stateid. It appears that the Linux client uses a separate lock owner for each file resulting in one lock state per lock owner, so this test on '2' is safe. For another client it might not be safe. So this patch changes check_for_locks() to use the (newish) find_any_file_locked() so that it doesn't take a reference on the nfs4_file and so never calls nfsd_file_put(), and so never sleeps. With this check it is safe to restore the use of check_for_locks() rather than testing so_count against the mysterious '2'. | ||||
| CVE-2023-53820 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: loop: loop_set_status_from_info() check before assignment In loop_set_status_from_info(), lo->lo_offset and lo->lo_sizelimit should be checked before reassignment, because if an overflow error occurs, the original correct value will be changed to the wrong value, and it will not be changed back. Moreover, the original patch did not solve the problem: the value was set and the ioctl returned an error, but subsequent I/O used the value in the loop driver, which still raised an alarm: loop_handle_cmd do_req_filebacked loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset; lo_rw_aio cmd->iocb.ki_pos = pos | ||||
| CVE-2023-53692 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: ext4: fix use-after-free read in ext4_find_extent for bigalloc + inline Syzbot found the following issue: loop0: detected capacity change from 0 to 2048 EXT4-fs (loop0): mounted filesystem 00000000-0000-0000-0000-000000000000 without journal. Quota mode: none. ================================================================== BUG: KASAN: use-after-free in ext4_ext_binsearch_idx fs/ext4/extents.c:768 [inline] BUG: KASAN: use-after-free in ext4_find_extent+0x76e/0xd90 fs/ext4/extents.c:931 Read of size 4 at addr ffff888073644750 by task syz-executor420/5067 CPU: 0 PID: 5067 Comm: syz-executor420 Not tainted 6.2.0-rc1-syzkaller #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022 Call Trace: <TASK> __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x1b1/0x290 lib/dump_stack.c:106 print_address_description+0x74/0x340 mm/kasan/report.c:306 print_report+0x107/0x1f0 mm/kasan/report.c:417 kasan_report+0xcd/0x100 mm/kasan/report.c:517 ext4_ext_binsearch_idx fs/ext4/extents.c:768 [inline] ext4_find_extent+0x76e/0xd90 fs/ext4/extents.c:931 ext4_clu_mapped+0x117/0x970 fs/ext4/extents.c:5809 ext4_insert_delayed_block fs/ext4/inode.c:1696 [inline] ext4_da_map_blocks fs/ext4/inode.c:1806 [inline] ext4_da_get_block_prep+0x9e8/0x13c0 fs/ext4/inode.c:1870 ext4_block_write_begin+0x6a8/0x2290 fs/ext4/inode.c:1098 ext4_da_write_begin+0x539/0x760 fs/ext4/inode.c:3082 generic_perform_write+0x2e4/0x5e0 mm/filemap.c:3772 ext4_buffered_write_iter+0x122/0x3a0 fs/ext4/file.c:285 ext4_file_write_iter+0x1d0/0x18f0 call_write_iter include/linux/fs.h:2186 [inline] new_sync_write fs/read_write.c:491 [inline] vfs_write+0x7dc/0xc50 fs/read_write.c:584 ksys_write+0x177/0x2a0 fs/read_write.c:637 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x63/0xcd RIP: 0033:0x7f4b7a9737b9 RSP: 002b:00007ffc5cac3668 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4b7a9737b9 RDX: 00000000175d9003 RSI: 0000000020000200 RDI: 0000000000000004 RBP: 00007f4b7a933050 R08: 0000000000000000 R09: 0000000000000000 R10: 000000000000079f R11: 0000000000000246 R12: 00007f4b7a9330e0 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 </TASK> The above issue happens when both the bigalloc and inline data features are enabled. Commit 131294c35ed6 fixed the delayed allocation bug in ext4_clu_mapped() for bigalloc + inline, but it only resolved the case where inline data is still present: if the inline data has already been converted to an extent (ext4_da_convert_inline_data_to_extent) before writepages, the EXT4_STATE_MAY_INLINE_DATA flag is no longer set, yet i_data still stores the inline data. This then triggers a use-after-free when looking up the extent. To resolve this, an "ext4_has_inline_data(inode)" check needs to be added in ext4_clu_mapped(). | ||||
| CVE-2022-50286 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 5.5 Medium |
| In the Linux kernel, the following vulnerability has been resolved: ext4: fix delayed allocation bug in ext4_clu_mapped for bigalloc + inline When converting files with inline data to extents, delayed allocations made on a file system created with both the bigalloc and inline options can result in invalid extent status cache content, incorrect reserved cluster counts, kernel memory leaks, and potential kernel panics. With bigalloc, the code that determines whether a block must be delayed allocated searches the extent tree to see if that block maps to a previously allocated cluster. If not, the block is delayed allocated, and otherwise, it isn't. However, if the inline option is also used, and if the file containing the block is marked as able to store data inline, there isn't a valid extent tree associated with the file. The current code in ext4_clu_mapped() calls ext4_find_extent() to search the non-existent tree for a previously allocated cluster anyway, which typically finds nothing, as desired. However, a side effect of the search can be to cache invalid content from the non-existent tree (garbage) in the extent status tree, including bogus entries in the pending reservation tree. To fix this, avoid searching the extent tree when allocating blocks for bigalloc + inline files that are being converted from inline to extent mapped. | ||||
| CVE-2025-14766 | 4 Apple, Google, Linux and 1 more | 5 Macos, Chrome, V8 and 2 more | 2025-12-23 | 8.8 High |
| Out of bounds read and write in V8 in Google Chrome prior to 143.0.7499.147 allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page. (Chromium security severity: High) | ||||
| CVE-2024-38588 | 2 Debian, Linux | 2 Debian Linux, Linux Kernel | 2025-12-23 | 7.8 High |
| In the Linux kernel, the following vulnerability has been resolved: ftrace: Fix possible use-after-free issue in ftrace_location() KASAN reports a bug: BUG: KASAN: use-after-free in ftrace_location+0x90/0x120 Read of size 8 at addr ffff888141d40010 by task insmod/424 CPU: 8 PID: 424 Comm: insmod Tainted: G W 6.9.0-rc2+ [...] Call Trace: <TASK> dump_stack_lvl+0x68/0xa0 print_report+0xcf/0x610 kasan_report+0xb5/0xe0 ftrace_location+0x90/0x120 register_kprobe+0x14b/0xa40 kprobe_init+0x2d/0xff0 [kprobe_example] do_one_initcall+0x8f/0x2d0 do_init_module+0x13a/0x3c0 load_module+0x3082/0x33d0 init_module_from_file+0xd2/0x130 __x64_sys_finit_module+0x306/0x440 do_syscall_64+0x68/0x140 entry_SYSCALL_64_after_hwframe+0x71/0x79 The root cause is that, in lookup_rec(), the ftrace record of some address is being searched for in the ftrace pages of some module, but those ftrace pages are at the same time being freed in ftrace_release_mod() as the corresponding module is being deleted: CPU1 \| CPU2 register_kprobes() { \| delete_module() { check_kprobe_address_safe() { \| arch_check_ftrace_location() { \| ftrace_location() { \| lookup_rec() // USE! \| ftrace_release_mod() // Free! To fix this issue: 1. Hold the RCU lock while accessing ftrace pages in ftrace_location_range(); 2. Use ftrace_location_range() instead of lookup_rec() in ftrace_location(); 3. Call synchronize_rcu() before freeing any ftrace pages both in ftrace_process_locs()/ftrace_release_mod()/ftrace_free_mem(). | ||||
| CVE-2024-35867 | 3 Debian, Linux, Redhat | 3 Debian Linux, Linux Kernel, Enterprise Linux | 2025-12-23 | 7.8 High |
| In the Linux kernel, the following vulnerability has been resolved: smb: client: fix potential UAF in cifs_stats_proc_show() Skip sessions that are being torn down (status == SES_EXITING) to avoid a UAF. | ||||
| CVE-2024-38545 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 7.8 High |
| In the Linux kernel, the following vulnerability has been resolved: RDMA/hns: Fix UAF for cq async event The refcount of CQ is not protected by locks. When CQ asynchronous events and CQ destruction are concurrent, CQ may have been released, which will cause UAF. Use the xa_lock() to protect the CQ refcount. | ||||
| CVE-2025-68326 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 5.5 Medium |
| In the Linux kernel, the following vulnerability has been resolved: drm/xe/guc: Fix stack_depot usage Add missing stack_depot_init() call when CONFIG_DRM_XE_DEBUG_GUC is enabled to fix the following call stack: [] BUG: kernel NULL pointer dereference, address: 0000000000000000 [] Workqueue: drm_sched_run_job_work [gpu_sched] [] RIP: 0010:stack_depot_save_flags+0x172/0x870 [] Call Trace: [] <TASK> [] fast_req_track+0x58/0xb0 [xe] (cherry picked from commit 64fdf496a6929a0a194387d2bb5efaf5da2b542f) | ||||
| CVE-2025-68339 | 1 Linux | 1 Linux Kernel | 2025-12-23 | N/A |
| In the Linux kernel, the following vulnerability has been resolved: atm/fore200e: Fix possible data race in fore200e_open() Protect access to fore200e->available_cell_rate with rate_mtx lock in the error handling path of fore200e_open() to prevent a data race. The field fore200e->available_cell_rate is a shared resource used to track available bandwidth. It is concurrently accessed by fore200e_open(), fore200e_close(), and fore200e_change_qos(). In fore200e_open(), the lock rate_mtx is correctly held when subtracting vcc->qos.txtp.max_pcr from available_cell_rate to reserve bandwidth. However, if the subsequent call to fore200e_activate_vcin() fails, the function restores the reserved bandwidth by adding back to available_cell_rate without holding the lock. This introduces a race condition because available_cell_rate is a global device resource shared across all VCCs. If the error path in fore200e_open() executes concurrently with operations like fore200e_close() or fore200e_change_qos() on other VCCs, a read-modify-write race occurs. Specifically, the error path reads the rate without the lock. If another CPU acquires the lock and modifies the rate (e.g., releasing bandwidth in fore200e_close()) between this read and the subsequent write, the error path will overwrite the concurrent update with a stale value. This results in incorrect bandwidth accounting. | ||||
| CVE-2025-68327 | 1 Linux | 1 Linux Kernel | 2025-12-23 | N/A |
| In the Linux kernel, the following vulnerability has been resolved: usb: renesas_usbhs: Fix synchronous external abort on unbind A synchronous external abort occurs on the Renesas RZ/G3S SoC if unbind is executed after the configuration sequence described below: modprobe usb_f_ecm modprobe libcomposite modprobe configfs cd /sys/kernel/config/usb_gadget mkdir -p g1 cd g1 echo "0x1d6b" > idVendor echo "0x0104" > idProduct mkdir -p strings/0x409 echo "0123456789" > strings/0x409/serialnumber echo "Renesas." > strings/0x409/manufacturer echo "Ethernet Gadget" > strings/0x409/product mkdir -p functions/ecm.usb0 mkdir -p configs/c.1 mkdir -p configs/c.1/strings/0x409 echo "ECM" > configs/c.1/strings/0x409/configuration if [ ! -L configs/c.1/ecm.usb0 ]; then ln -s functions/ecm.usb0 configs/c.1 fi echo 11e20000.usb > UDC echo 11e20000.usb > /sys/bus/platform/drivers/renesas_usbhs/unbind The displayed trace is as follows: Internal error: synchronous external abort: 0000000096000010 [#1] SMP CPU: 0 UID: 0 PID: 188 Comm: sh Tainted: G M 6.17.0-rc7-next-20250922-00010-g41050493b2bd #55 PREEMPT Tainted: [M]=MACHINE_CHECK Hardware name: Renesas SMARC EVK version 2 based on r9a08g045s33 (DT) pstate: 604000c5 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : usbhs_sys_function_pullup+0x10/0x40 [renesas_usbhs] lr : usbhsg_update_pullup+0x3c/0x68 [renesas_usbhs] sp : ffff8000838b3920 x29: ffff8000838b3920 x28: ffff00000d585780 x27: 0000000000000000 x26: 0000000000000000 x25: 0000000000000000 x24: ffff00000c3e3810 x23: ffff00000d5e5c80 x22: ffff00000d5e5d40 x21: 0000000000000000 x20: 0000000000000000 x19: ffff00000d5e5c80 x18: 0000000000000020 x17: 2e30303230316531 x16: 312d7968703a7968 x15: 3d454d414e5f4344 x14: 000000000000002c x13: 0000000000000000 x12: 0000000000000000 x11: ffff00000f358f38 x10: ffff00000f358db0 x9 : ffff00000b41f418 x8 : 0101010101010101 x7 : 7f7f7f7f7f7f7f7f x6 : fefefeff6364626d x5 : 8080808000000000 x4 : 000000004b5ccb9d x3 : 0000000000000000 x2 : 0000000000000000 x1 : ffff800083790000 x0 : ffff00000d5e5c80 Call trace: usbhs_sys_function_pullup+0x10/0x40 [renesas_usbhs] (P) usbhsg_pullup+0x4c/0x7c [renesas_usbhs] usb_gadget_disconnect_locked+0x48/0xd4 gadget_unbind_driver+0x44/0x114 device_remove+0x4c/0x80 device_release_driver_internal+0x1c8/0x224 device_release_driver+0x18/0x24 bus_remove_device+0xcc/0x10c device_del+0x14c/0x404 usb_del_gadget+0x88/0xc0 usb_del_gadget_udc+0x18/0x30 usbhs_mod_gadget_remove+0x24/0x44 [renesas_usbhs] usbhs_mod_remove+0x20/0x30 [renesas_usbhs] usbhs_remove+0x98/0xdc [renesas_usbhs] platform_remove+0x20/0x30 device_remove+0x4c/0x80 device_release_driver_internal+0x1c8/0x224 device_driver_detach+0x18/0x24 unbind_store+0xb4/0xb8 drv_attr_store+0x24/0x38 sysfs_kf_write+0x7c/0x94 kernfs_fop_write_iter+0x128/0x1b8 vfs_write+0x2ac/0x350 ksys_write+0x68/0xfc __arm64_sys_write+0x1c/0x28 invoke_syscall+0x48/0x110 el0_svc_common.constprop.0+0xc0/0xe0 do_el0_svc+0x1c/0x28 el0_svc+0x34/0xf0 el0t_64_sync_handler+0xa0/0xe4 el0t_64_sync+0x198/0x19c Code: 7100003f 1a9f07e1 531c6c22 f9400001 (79400021) ---[ end trace 0000000000000000 ]--- note: sh[188] exited with irqs disabled note: sh[188] exited with preempt_count 1 The issue occurs because usbhs_sys_function_pullup(), which accesses the IP registers, is executed after the USBHS clocks have been disabled. The problem is reproducible on the Renesas RZ/G3S SoC starting with the addition of module stop in the clock enable/disable APIs. With module stop functionality enabled, a bus error is expected if a master accesses a module whose clock has been stopped and module stop is activated. Disable the IP clocks at the end of remove. | ||||
| CVE-2025-68330 | 1 Linux | 1 Linux Kernel | 2025-12-23 | N/A |
| In the Linux kernel, the following vulnerability has been resolved: iio: accel: bmc150: Fix irq assumption regression The code in bmc150-accel-core.c unconditionally calls bmc150_accel_set_interrupt() in the iio_buffer_setup_ops, such as on the runtime PM resume path giving a kernel splat like this if the device has no interrupts: Unable to handle kernel NULL pointer dereference at virtual address 00000001 when read PC is at bmc150_accel_set_interrupt+0x98/0x194 LR is at __pm_runtime_resume+0x5c/0x64 (...) Call trace: bmc150_accel_set_interrupt from bmc150_accel_buffer_postenable+0x40/0x108 bmc150_accel_buffer_postenable from __iio_update_buffers+0xbe0/0xcbc __iio_update_buffers from enable_store+0x84/0xc8 enable_store from kernfs_fop_write_iter+0x154/0x1b4 This bug seems to have been in the driver since the beginning, but it only manifests recently, I do not know why. Store the IRQ number in the state struct, as this is a common pattern in other drivers, then use this to determine if we have IRQ support or not. | ||||
| CVE-2025-68343 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: can: gs_usb: gs_usb_receive_bulk_callback(): check actual_length before accessing header The driver expects to receive a struct gs_host_frame in gs_usb_receive_bulk_callback(). Use struct_group to describe the header of the struct gs_host_frame and check that we have at least received the header before accessing any members of it. To resubmit the URB, do not dereference the pointer chain "dev->parent->hf_size_rx" but use "parent->hf_size_rx" instead. Since "urb->context" contains "parent", it is always defined, while "dev" is not defined if the URB is too short. | ||||
| CVE-2025-68338 | 1 Linux | 1 Linux Kernel | 2025-12-23 | N/A |
| In the Linux kernel, the following vulnerability has been resolved: net: dsa: microchip: Don't free uninitialized ksz_irq If something goes wrong at setup, ksz_irq_free() can be called on uninitialized ksz_irq (for example when ksz_ptp_irq_setup() fails). It leads to freeing uninitialized IRQ numbers and/or domains. Use dsa_switch_for_each_user_port_continue_reverse() in the error path to iterate only over the fully initialized ports. | ||||
| CVE-2025-68342 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: can: gs_usb: gs_usb_receive_bulk_callback(): check actual_length before accessing data The URB received in gs_usb_receive_bulk_callback() contains a struct gs_host_frame. The length of the data after the header depends on the gs_host_frame hf::flags and the active device features (e.g. time stamping). Introduce a new function gs_usb_get_minimum_length() and check that we have at least received the required amount of data before accessing it. Only copy to the skb the data that has actually been received. [mkl: rename gs_usb_get_minimum_length() -> gs_usb_get_minimum_rx_length()] | ||||
| CVE-2025-68331 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 7.0 High |
| In the Linux kernel, the following vulnerability has been resolved: usb: uas: fix urb unmapping issue when the uas device is removed during ongoing data transfer When a UAS device is unplugged during data transfer, there is a probability of a system panic occurring. The root cause is an access to an invalid memory address during URB callback handling. Specifically, this happens when the dma_direct_unmap_sg() function is called within the usb_hcd_unmap_urb_for_dma() interface, but the sg->dma_address field is 0 and the sg data structure has already been freed. The SCSI driver sends transfer commands by invoking uas_queuecommand_lck() in uas.c, using the uas_submit_urbs() function to submit requests to USB. Within the uas_submit_urbs() implementation, three URBs (sense_urb, data_urb, and cmd_urb) are sequentially submitted. Device removal may occur at any point during uas_submit_urbs execution, which may result in URB submission failure. However, some URBs might have been successfully submitted before the failure, and uas_submit_urbs will return the -ENODEV error code in this case. The current error handling directly calls scsi_done(). In the SCSI driver, this eventually triggers scsi_complete() to invoke scsi_end_request() for releasing the sgtable. The successfully submitted URBs, when being unlinked to giveback, call usb_hcd_unmap_urb_for_dma() in hcd.c, leading to exceptions during sg unmapping operations since the sg data structure has already been freed. This patch modifies the error condition check in the uas_submit_urbs() function. When a UAS device is removed but one or more URBs have already been successfully submitted to USB, it avoids immediately invoking scsi_done() and saves the cmnd to the devinfo->cmnd array. If the successfully submitted URBs complete before devinfo->resetting is set, then the scsi_done() function will be called within uas_try_complete() after all pending URB operations are finalized. Otherwise, the scsi_done() function will be called within uas_zap_pending(), which is executed after usb_kill_anchored_urbs(). The error handling only takes effect when uas_queuecommand_lck() calls uas_submit_urbs() and returns the error value -ENODEV. In this case, the device is disconnected, and the flow proceeds to uas_disconnect(), where uas_zap_pending() is invoked to call uas_try_complete(). | ||||
| CVE-2025-68337 | 1 Linux | 1 Linux Kernel | 2025-12-23 | 5.5 Medium |
| In the Linux kernel, the following vulnerability has been resolved: jbd2: avoid bug_on in jbd2_journal_get_create_access() when file system corrupted There's an issue when the file system is corrupted: ------------[ cut here ]------------ kernel BUG at fs/jbd2/transaction.c:1289! Oops: invalid opcode: 0000 [#1] SMP KASAN PTI CPU: 5 UID: 0 PID: 2031 Comm: mkdir Not tainted 6.18.0-rc1-next RIP: 0010:jbd2_journal_get_create_access+0x3b6/0x4d0 RSP: 0018:ffff888117aafa30 EFLAGS: 00010202 RAX: 0000000000000000 RBX: ffff88811a86b000 RCX: ffffffff89a63534 RDX: 1ffff110200ec602 RSI: 0000000000000004 RDI: ffff888100763010 RBP: ffff888100763000 R08: 0000000000000001 R09: ffff888100763028 R10: 0000000000000003 R11: 0000000000000000 R12: 0000000000000000 R13: ffff88812c432000 R14: ffff88812c608000 R15: ffff888120bfc000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f91d6970c99 CR3: 00000001159c4000 CR4: 00000000000006f0 Call Trace: <TASK> __ext4_journal_get_create_access+0x42/0x170 ext4_getblk+0x319/0x6f0 ext4_bread+0x11/0x100 ext4_append+0x1e6/0x4a0 ext4_init_new_dir+0x145/0x1d0 ext4_mkdir+0x326/0x920 vfs_mkdir+0x45c/0x740 do_mkdirat+0x234/0x2f0 __x64_sys_mkdir+0xd6/0x120 do_syscall_64+0x5f/0xfa0 entry_SYSCALL_64_after_hwframe+0x76/0x7e The above issue occurs with us in errors=continue mode when accompanied by storage failures. There have been many inconsistencies in the file system data. In the case of file system data inconsistency, for example, if the block bitmap of a referenced block is not set, it can lead to the situation where a block being committed is allocated and used again. As a result, the following condition will not be satisfied, which triggers the BUG_ON. Of course, it is entirely possible to construct a problematic image that can trigger this BUG_ON through specific operations. In fact, I have constructed such an image and easily reproduced this issue. Therefore, J_ASSERT() holds true only under ideal conditions, but it may not necessarily be satisfied in exceptional scenarios. Using J_ASSERT() directly in abnormal situations would cause the system to crash, which is clearly not what we want. So here we directly trigger a JBD abort instead of immediately invoking BUG_ON. | ||||
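
CVE-2025-68342 and CVE-2025-68343 above both boil down to trusting a URB payload before checking how many bytes were actually transferred. The following is a minimal, self-contained userspace sketch of that length-check pattern; the frame layout, field names, and sizes are invented for illustration and are not the driver's real struct gs_host_frame.

```c
/*
 * Illustrative only: validate the received length before reading a
 * header, and again before trusting the header's payload length.
 * The layout below is a stand-in, not the gs_usb wire format.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct frame_header {
    uint32_t can_id;
    uint8_t  len;          /* payload length in bytes */
    uint8_t  flags;
    uint8_t  reserved[2];
};

/* Returns the payload length on success, -1 if the buffer is too short. */
static int parse_frame(const uint8_t *buf, size_t actual_length)
{
    struct frame_header hdr;

    /* Never read header fields before checking we received that much. */
    if (actual_length < sizeof(hdr))
        return -1;
    memcpy(&hdr, buf, sizeof(hdr));

    /* Only trust as much payload as was actually transferred. */
    if (actual_length < sizeof(hdr) + hdr.len)
        return -1;

    return hdr.len;
}

int main(void)
{
    uint8_t short_buf[4] = { 0 };
    uint8_t full_buf[16] = { 0x01, 0x00, 0x00, 0x00, 8, 0, 0, 0 };

    printf("short buffer -> %d\n", parse_frame(short_buf, sizeof(short_buf)));
    printf("full frame   -> %d\n", parse_frame(full_buf, sizeof(full_buf)));
    return 0;
}
```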
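
CVE-2025-68339 above describes an error path that refunds reserved bandwidth without re-taking the lock that guards it. As a hedged illustration of why the refund is itself a read-modify-write that needs the same lock, here is a small pthread-based sketch; the names (rate_lock, available_rate, activate_channel) are made up and do not mirror the fore200e driver's internals.

```c
/* Illustrative only: every adjustment of the shared rate, including the
 * error-path refund, happens under rate_lock. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t rate_lock = PTHREAD_MUTEX_INITIALIZER;
static long available_rate = 1000;        /* shared, device-wide resource */

static bool activate_channel(long pcr)
{
    /* Stand-in for the activation step that may fail. */
    return pcr <= 400;
}

static int open_channel(long pcr)
{
    /* Reserve bandwidth under the lock. */
    pthread_mutex_lock(&rate_lock);
    if (available_rate < pcr) {
        pthread_mutex_unlock(&rate_lock);
        return -1;
    }
    available_rate -= pcr;
    pthread_mutex_unlock(&rate_lock);

    if (!activate_channel(pcr)) {
        /* The refund is a read-modify-write of shared state, so it
         * must hold the same lock -- the point of the upstream fix. */
        pthread_mutex_lock(&rate_lock);
        available_rate += pcr;
        pthread_mutex_unlock(&rate_lock);
        return -1;
    }
    return 0;
}

int main(void)
{
    int ret = open_channel(300);
    printf("open(300) -> %d, available = %ld\n", ret, available_rate);

    ret = open_channel(500);              /* activation fails, rate refunded */
    printf("open(500) -> %d, available = %ld\n", ret, available_rate);
    return 0;
}
```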
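
CVE-2025-68330 above is a regression from assuming an interrupt line is always present; the fix records the IRQ number in the driver state and skips interrupt configuration when none was provided. Below is a standalone sketch of that guard, with illustrative names rather than the bmc150 driver's real structures.

```c
/* Illustrative only: keep the IRQ number in the state struct and treat
 * irq <= 0 as "no interrupt line wired up". */
#include <stdbool.h>
#include <stdio.h>

struct accel_state {
    int  irq;            /* <= 0 means the device is polled, no IRQ */
    bool irq_enabled;
};

static int accel_set_interrupt(struct accel_state *st, bool enable)
{
    /* Polled-only devices have nothing to configure here. */
    if (st->irq <= 0)
        return 0;
    /* ...program the data-ready interrupt registers here... */
    st->irq_enabled = enable;
    return 0;
}

int main(void)
{
    struct accel_state no_irq   = { .irq = 0 };
    struct accel_state with_irq = { .irq = 42 };

    accel_set_interrupt(&no_irq, true);    /* safe no-op */
    accel_set_interrupt(&with_irq, true);  /* would enable the IRQ */

    printf("no_irq enabled=%d, with_irq enabled=%d\n",
           no_irq.irq_enabled, with_irq.irq_enabled);
    return 0;
}
```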