
How is the Linux kernel tested?

How do Linux kernel developers test their code, both locally and after it has been committed? Do they use some kind of unit testing, build automation, or test plans?


Can I get information from other filesystems through overlayfs?

How can I solve the problem of missing inode information after modifying the upperdir path in overlayfs?

After changing the upperdir path:

root@bcd0d4e391f8:~/myDir# ls -i   
ls: cannot access 'ssd.c': No such file or directory  
ls: cannot access 'ssd.abc': No such file or directory  
? ssd.abc  ? ssd.c

*The upperdir path was modified by changing the overlayfs code.


*Original code

static int ovl_create_upper(struct dentry *dentry, struct inode *inode,
                            struct ovl_cattr *attr)
{
    struct dentry *upperdir = ovl_dentry_upper(dentry->d_parent);
    struct inode *udir = upperdir->d_inode;
    struct dentry *newdentry;
    int err;

    newdentry = ovl_create_real(udir,
                                lookup_one_len(dentry->d_name.name,
                                               upperdir,
                                               dentry->d_name.len),
                                attr);
    /* ... */

*Fix code example

static int ovl_create_upper(struct dentry *dentry, struct inode *inode,
                            struct ovl_cattr *attr)
{
    struct dentry *upperdir = ovl_dentry_upper(dentry->d_parent);
    struct inode *udir = upperdir->d_inode;
    struct dentry *newdentry;
    int err;
    extern struct qsh_metadata qsh_mt; //HOON

    newdentry = ovl_create_real(qsh_mt.qsh_dentry->d_inode,
                                lookup_one_len(dentry->d_name.name,
                                               qsh_mt.qsh_dentry,
                                               dentry->d_name.len),
                                attr);
    /* ... */

*In the above code, I replaced the upperdir part with the mounted dentry object of the disk I want. The declaration extern struct qsh_metadata qsh_mt; is the code I added.

e.g. struct dentry *upperdir = qsh_dentry;
docker base mount: /dev/sda
qsh_dentry: mount of /dev/sdb

How to add a remoteproc node to the device tree of a Zynq-7000-based RedPitaya board

I am trying to run the RedPitaya in AMP mode.

I didn't find much information on the remoteproc driver and what entries it needs in the device tree source. I found this document and added its node to my device tree, but had little luck making the examples work. I also found different variations of the remoteproc device tree node from different sources, which is pretty confusing.

Can someone point me to where I can read more about the AMP feature of the Arm Cortex-A9, and can someone explain some of the entries in the remoteproc dts node?
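
For reference, a minimal sketch of the kind of node used with the Xilinx kernel fork's zynq_remoteproc driver; the compatible string, vring interrupt IDs, and reserved-memory addresses below are assumptions that must match your BSP and firmware layout:

    reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        rproc_reserved: rproc@3e000000 {
            no-map;
            reg = <0x3e000000 0x01000000>;   /* example: 16 MiB for the remote CPU; adjust to your RAM */
        };
    };

    remoteproc0: remoteproc@0 {
        compatible = "xlnx,zynq_remoteproc";
        vring0 = <15>;                        /* inter-processor notification IRQ IDs */
        vring1 = <14>;
        memory-region = <&rproc_reserved>;
    };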

Docker daemon service crashes very often on worker node

The Docker service stops very often on one of my remote worker nodes.

I am not able to figure out why this is happening.

OS: Ubuntu 19.04

Log: journalctl -xe

Mar 12 10:43:44 machine1 systemd-networkd[434]: vethc827a75: Gained IPv6LL
Mar 12 10:43:44 machine1 kernel: docker_gwbridge: port 2(veth7e595dc) entered disabled state
Mar 12 10:43:45 machine1 kernel: docker_gwbridge: port 3(veth7574e8b) entered blocking state
Mar 12 10:43:45 machine1 kernel: docker_gwbridge: port 3(veth7574e8b) entered forwarding state
Mar 12 10:43:45 machine1 kernel: veth2: renamed from veth3b5a70d
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered blocking state
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered disabled state
Mar 12 10:43:45 machine1 kernel: device veth2 entered promiscuous mode
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered blocking state
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered forwarding state
Mar 12 10:43:45 machine1 kernel: br0: port 3(veth1) entered disabled state
Mar 12 10:43:45 machine1 kernel: docker_gwbridge: port 3(veth7574e8b) entered disabled state
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered disabled state
Mar 12 10:43:45 machine1 kernel: docker_gwbridge: port 4(vethcb2c2a4) entered blocking state
Mar 12 10:43:45 machine1 kernel: docker_gwbridge: port 4(vethcb2c2a4) entered disabled state
Mar 12 10:43:45 machine1 kernel: device vethcb2c2a4 entered promiscuous mode
Mar 12 10:43:45 machine1 systemd-udevd[2887]: Could not generate persistent MAC address for vethc361b7b: No such file or directory
Mar 12 10:43:45 machine1 systemd-udevd[2890]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 12 10:43:45 machine1 systemd-udevd[2890]: Could not generate persistent MAC address for veth7574e8b: No such file or directory
Mar 12 10:43:45 machine1 kernel: veth2: renamed from veth6691f49
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered blocking state
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered disabled state
Mar 12 10:43:45 machine1 kernel: device veth2 entered promiscuous mode
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered blocking state
Mar 12 10:43:45 machine1 kernel: br0: port 4(veth2) entered forwarding state
Mar 12 10:43:45 machine1 systemd-udevd[2937]: link_config: could not get ethtool features for vethbf19a70
Mar 12 10:43:45 machine1 systemd-udevd[2937]: Could not set offload features of vethbf19a70: No such device
Mar 12 10:43:45 machine1 systemd-udevd[2889]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 12 10:43:45 machine1 systemd-udevd[2891]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 12 10:43:45 machine1 systemd-udevd[2891]: link_config: could not get ethtool features for veth3b5a70d
Mar 12 10:43:45 machine1 systemd-udevd[2891]: Could not set offload features of veth3b5a70d: No such device
Mar 12 10:43:45 machine1 systemd-udevd[2885]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 12 10:43:45 machine1 systemd-udevd[2885]: Could not generate persistent MAC address for veth2100695: No such file or directory
Mar 12 10:43:45 machine1 systemd-udevd[2884]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.

Hooking on CentOS 8 fails and reboots the system

I wrote a simple hook that changes pointers in the sys_call_table and points the execve entry to my own function. The hook works just fine on Ubuntu with kernel 4.15, but when I tried it on a CentOS 8 system with kernel 4.18 it failed and rebooted the system. How can this be done on newer kernels?

This is my code:

static int lkm_example_init(void)
{
    /* clear the WP bit in CR0 so the read-only syscall table is writable */
    write_cr0(read_cr0() & ~0x10000);

    sys_call_table = (void *)kallsyms_lookup_name("sys_call_table");
    original_call = sys_call_table[__NR_kill];
    sys_call_table[__NR_kill] = our_sys_kill;

    execl = sys_call_table[__NR_execve];
    sys_call_table[__NR_execve] = our_execl;

    return 0;
}
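
A likely culprit, for what it's worth: newer kernels pin the WP bit, so the kernel's write_cr0() silently restores it, the table stays read-only, and the subsequent store faults. A sketch of the inline-assembly workaround commonly used in that situation (an assumption to verify on CentOS 8's 4.18 kernel, which carries many backports):

    static inline void unsafe_write_cr0(unsigned long val)
    {
        /* write CR0 directly, bypassing the pinned-bit check inside
         * the kernel's write_cr0() helper */
        asm volatile("mov %0, %%cr0" : "+r"(val) : : "memory");
    }

    /* usage: unsafe_write_cr0(read_cr0() & ~0x10000UL); */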

How to send a custom-size payload in a PCIe TLP?

In my PCIe driver, after calling pci_ioremap_bar() to map BARs into memory space, I can use:

    unsigned int ioread8(void __iomem *addr);
    unsigned int ioread16(void __iomem *addr);
    unsigned int ioread32(void __iomem *addr);
    void iowrite8(u8 value, void __iomem *addr);
    void iowrite16(u16 value, void __iomem *addr);
    void iowrite32(u32 value, void __iomem *addr);
  • Each call generates a single MRd/MWr TLP with a maximum payload of 4 bytes (32 bits).

  • My PCIe device supports TLP payloads of up to 1024 bytes.

How can I take advantage of that and send more bytes in a single TLP from the CPU to my device?

My question puts DMA aside; I mean simple PIO from the host/CPU. From my device, I can send 1 KB TLPs to dma_alloc_coherent() memory without any problem. I'd like to do the same in the other direction without using descriptors.

I know it normally isn't possible, but I read somewhere that recent CPUs might have new features that would allow me to send more than one DWORD.
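
For illustration, a sketch of the write-combining approach such hints usually refer to: mapping the BAR with ioremap_wc() lets the CPU's WC buffers merge consecutive stores into larger MWr TLPs (often up to 64 bytes). Whether merging actually happens is CPU- and chipset-specific, so treat this as an assumption to verify with a bus analyzer; pdev and buf are placeholders:

    /* map BAR0 write-combining instead of uncached */
    void __iomem *wc_base = ioremap_wc(pci_resource_start(pdev, 0),
                                       pci_resource_len(pdev, 0));

    /* burst the buffer out; consecutive stores may be merged by the
     * write-combining buffers into multi-DWORD TLPs */
    memcpy_toio(wc_base, buf, len);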

How to allocate large, contiguous memory regions in Linux

Yes, I will ultimately be using this for DMA, but let's leave coherency aside for the moment. I have 64-bit BAR registers; therefore, AFAIK, all of RAM (e.g. above 4 GB) is available for DMA.

I am looking for about 64MB of contiguous RAM. Yes, that's a lot.

Ubuntu 16 and 18 have CONFIG_CMA=y but CONFIG_DMA_CMA is not set at kernel compile time.

I note that if both were set (at kernel build time) I could simply call dma_alloc_coherent; however, for logistical reasons, it is undesirable to recompile the kernel.

The machines will always have at least 32GB of RAM, do not run anything RAM intensive, and the kernel module will load shortly after boot before RAM becomes significantly fragmented and, AFAIK, nothing else is using the CMA.

I have set the kernel parameter cma=1G (and have tried 256M and 512M).

# dmesg | grep cma
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.4.170 root=UUID=2b25933c-e10c-4833-b5b2-92e9d3a33fec ro cma=1G
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.4.170 root=UUID=2b25933c-e10c-4833-b5b2-92e9d3a33fec ro cma=1G
[    0.000000] Memory: 65612056K/67073924K available (8604K kernel code, 1332K rwdata, 3972K rodata, 1484K init, 1316K bss, 1461868K reserved, 0K cma-reserved)

I have tried alloc_pages(GFP_KERNEL | __GFP_HIGHMEM, order), no joy.

And finally the actual question: how does one get large contiguous blocks from the CMA? Everything I have found online suggests the use of dma_alloc_coherent, but I know this only works with CONFIG_CMA=y and CONFIG_DMA_CMA=y.

The module source, tim.c

#include <linux/module.h>       /* Needed by all modules */
#include <linux/kernel.h>       /* Needed for KERN_INFO */
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/gfp.h>
unsigned long big;
const int order = 15;   /* 2^15 pages x 4 KiB = 128 MiB */
static int __init tim_init(void)
{
        printk(KERN_INFO "Hello Tim!\n");
        /* orders above MAX_ORDER-1 (10 on x86, i.e. 4 MiB) trip a
         * WARN_ON_ONCE in __alloc_pages_nodemask() and return NULL */
        big = __get_free_pages(GFP_KERNEL | __GFP_HIGHMEM, order);
        printk(KERN_NOTICE "big = %lx\n", big);
        if (!big)
                return -EIO; // AT&T

        return 0; // success
}

static void __exit tim_exit(void)
{
        free_pages(big, order);
        printk(KERN_INFO "Tim says, Goodbye world\n");
}

module_init(tim_init);
module_exit(tim_exit);
MODULE_LICENSE("GPL");

Inserting the module yields...

# insmod tim.ko
insmod: ERROR: could not insert module tim.ko: Input/output error
# dmesg | tail -n 33

[  176.137053] Hello Tim!
[  176.137056] ------------[ cut here ]------------
[  176.137062] WARNING: CPU: 4 PID: 2829 at mm/page_alloc.c:3198 __alloc_pages_nodemask+0xd14/0xe00()
[  176.137063] Modules linked in: tim(OE+) xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables x_tables configfs vxlan ip6_udp_tunnel udp_tunnel uio pf_ring(OE) x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm mei_me mei irqbypass sb_edac ioatdma edac_core shpchp serio_raw input_leds lpc_ich dca acpi_pad 8250_fintek mac_hid ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid0 multipath linear
[  176.137094]  hid_generic usbhid crct10dif_pclmul crc32_pclmul ghash_clmulni_intel e1000e aesni_intel raid1 aes_x86_64 isci lrw libsas ahci gf128mul ptp glue_helper ablk_helper cryptd psmouse hid libahci scsi_transport_sas pps_core wmi fjes
[  176.137105] CPU: 4 PID: 2829 Comm: insmod Tainted: G           OE   4.4.170 #1
[  176.137106] Hardware name: Supermicro X9SRL-F/X9SRL-F, BIOS 3.3 11/13/2018
[  176.137108]  0000000000000286 8ba89d23429d5749 ffff88100f5cba90 ffffffff8140a061
[  176.137110]  0000000000000000 ffffffff81cd89dd ffff88100f5cbac8 ffffffff810852d2
[  176.137112]  ffffffff821da620 0000000000000000 000000000000000f 000000000000000f
[  176.137113] Call Trace:
[  176.137118]  [<ffffffff8140a061>] dump_stack+0x63/0x82
[  176.137121]  [<ffffffff810852d2>] warn_slowpath_common+0x82/0xc0
[  176.137123]  [<ffffffff8108541a>] warn_slowpath_null+0x1a/0x20
[  176.137125]  [<ffffffff811a2504>] __alloc_pages_nodemask+0xd14/0xe00
[  176.137128]  [<ffffffff810ddaef>] ? msg_print_text+0xdf/0x1a0
[  176.137132]  [<ffffffff8117bc3e>] ? irq_work_queue+0x8e/0xa0
[  176.137133]  [<ffffffff810de04f>] ? console_unlock+0x20f/0x550
[  176.137137]  [<ffffffff811edbdc>] alloc_pages_current+0x8c/0x110
[  176.137139]  [<ffffffffc0024000>] ? 0xffffffffc0024000
[  176.137141]  [<ffffffff8119ca2e>] __get_free_pages+0xe/0x40
[  176.137143]  [<ffffffffc0024020>] tim_init+0x20/0x1000 [tim]
[  176.137146]  [<ffffffff81002125>] do_one_initcall+0xb5/0x200
[  176.137149]  [<ffffffff811f90c5>] ? kmem_cache_alloc_trace+0x185/0x1f0
[  176.137151]  [<ffffffff81196eb5>] do_init_module+0x5f/0x1cf
[  176.137154]  [<ffffffff81111b05>] load_module+0x22e5/0x2960
[  176.137156]  [<ffffffff8110e080>] ? __symbol_put+0x60/0x60
[  176.137159]  [<ffffffff81221710>] ? kernel_read+0x50/0x80
[  176.137161]  [<ffffffff811123c4>] SYSC_finit_module+0xb4/0xe0
[  176.137163]  [<ffffffff8111240e>] SyS_finit_module+0xe/0x10
[  176.137167]  [<ffffffff8186179b>] entry_SYSCALL_64_fastpath+0x22/0xcb
[  176.137169] ---[ end trace 6aa0b905b8418c7b ]---
[  176.137170] big = 0

curiously, trying it again yields...

# insmod tim.ko
insmod: ERROR: could not insert module tim.ko: Input/output error
...and dmesg just shows:

[  302.068396] Hello Tim!
[  302.068398] big = 0

Why is there no stack dump on the second (and subsequent) tries?
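
For comparison, on a kernel that does have both CONFIG_CMA=y and CONFIG_DMA_CMA=y, the dma_alloc_coherent() path mentioned above would look roughly like this (a sketch; pdev stands in for your PCI device):

    #include <linux/dma-mapping.h>

    dma_addr_t dma_handle;
    void *cpu_addr;

    /* for large sizes this is satisfied from the CMA area reserved
     * by the cma= boot parameter */
    cpu_addr = dma_alloc_coherent(&pdev->dev, 64 * 1024 * 1024,
                                  &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;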

Implementing LSM hook bprm_check_security

Recently, I have been working on an application whitelisting solution for embedded Linux, based on the Linux Security Module framework. The main focus of my LSM is implementing the bprm_check_security hook, which is invoked when a program is executed from user space (we do not consider kernel processes). This hook is given a pointer of type struct linux_binprm *bprm, which includes a file pointer (to the executable file of the program being executed) and a char pointer (the name of the executed program).

Our application whitelisting solution is based on hash calculation. Accordingly, in my LSM, I use the file pointer (contained in the bprm pointer) to calculate a hash value and store that value together with the filename (also in the bprm pointer) as an entry in a list.

However, during Linux boot (before /sbin/init is executed), there are mismatches between the filename and the file pointer. For instance, for one of the first programs executed, the filename in the bprm pointer is "/bin/cat"; however, the file pointer in the same bprm does not refer to the actual /bin/cat file, but rather to busybox.

After researching for a long time, I found out that those files are executed by busybox in the initial initrd, which subsequently creates the actual rootfs, and that all of those files carry the magic number RAMFS_MAGIC (stored in inode->i_sb->s_magic). So I used this number to filter those processes; however, I am not sure whether this is the right way. I would appreciate any help.

It should be noted that I use the file pointer (included in the bprm pointer) to calculate the hash values; in other words, I don't read files by filename or filepath from user space.

Thanks.

/* include/linux/binfmts.h (abridged) */
struct linux_binprm {
    struct file *file;
    const char *filename;   /* Name of binary as seen by procps */
};
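
For context, a minimal sketch of how such a hook is registered with the LSM framework (the mylsm_* names are placeholders; registration details vary between kernel versions):

    #include <linux/lsm_hooks.h>
    #include <linux/binfmts.h>

    static int mylsm_bprm_check(struct linux_binprm *bprm)
    {
        /* bprm->file is what will actually be mapped and executed;
         * bprm->filename is only the name passed to execve(), and as
         * described above the two can disagree early in boot */
        return 0;   /* 0 = allow the exec, -EPERM = deny it */
    }

    static struct security_hook_list mylsm_hooks[] = {
        LSM_HOOK_INIT(bprm_check_security, mylsm_bprm_check),
    };

    static int __init mylsm_init(void)
    {
        security_add_hooks(mylsm_hooks, ARRAY_SIZE(mylsm_hooks), "mylsm");
        return 0;
    }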

Unexpected result when using container_of macro (Linux kernel)

I have a problem using the container_of macro from the Linux kernel. I have the following code:

#define container_of(ptr, type, member) ({ \
        const typeof( ((type *)0)->member) *__mptr = (ptr); \
        (type *)( (char *)__mptr - offsetof(type, member) );})


struct list_head
{
    struct list_head *prev;
    struct list_head *next;
};


struct fox
{
    unsigned long tail_length;
    unsigned long weight;
    unsigned int is_fantastic;

    /*Make this struct a node of the linked list*/
    struct list_head list;
};

I want to make the fox structure a node of a linked list.

int main(void)
{
    struct list_head node_first = {.prev=NULL, .next=NULL};
    struct fox first_f = {.tail_length=3, .weight=4, .is_fantastic=0, .list=node_first};

    struct fox *second_f; 
    second_f = container_of(&node_first, struct fox, list);
    printf("%lu\n", second_f->tail_length);
    return 0;
}

I expected to see 3 in the terminal, since second_f points to the first_f structure, but I get 140250641491552 (some "random" value from memory, I think).
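
For what it's worth, the initializer .list=node_first copies node_first into first_f, so &node_first is not the address of the member embedded in the struct; container_of() then subtracts the member offset from an unrelated stack address. A sketch of the working variant:

    /* take the address of the member that actually lives inside the
     * struct; container_of() then recovers the enclosing fox */
    struct fox *p = container_of(&first_f.list, struct fox, list);
    printf("%lu\n", p->tail_length);   /* prints 3 */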

gpiod - use labels in devicetree

I want to use libgpiod to control a few GPIOs from userspace on a custom board. I have an i.MX6UL processor, which has hundreds of pins; I'll use only 8 of them (as GPIOs).

I read about libgpiod since it is replacing the old sysfs API, and I'm happy that you can specify labels for each GPIO. The gpio block of the processor looks like the following code block and already has the gpio-controller property set. (Taken from the 4.14 kernel.)


            gpio2: gpio@20a0000 {
                compatible = "fsl,imx6ul-gpio", "fsl,imx35-gpio";
                reg = <0x020a0000 0x4000>;
                interrupts = <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
                gpio-controller;
                #gpio-cells = <2>;
                interrupt-controller;
                #interrupt-cells = <2>;
                gpio-ranges = <&iomuxc 0 49 16>, <&iomuxc 16 111 6>;
            };

I want to use a single pin of this controller, so I added the following block:

&gpio2 {
    resetl0 {
        //gpio-hog;
        output-high;
        line-name = "COBO0_ResetL";
        gpios = <15 1>;
    };
};

Without the gpio-hog property, the gpioinfo tool is unable to show me the labels; the same happens if I omit output-high/low. With the property, the label is displayed correctly, but the GPIO is marked as used, so I cannot control it from userspace (Device or resource busy).

So in short: I need a way to set a label in the devicetree that I can read from userspace, while still being able to control the GPIOs. I already saw gpio-line-names in the RPi devicetree, but I don't want to specify the whole bank as NC when using only one line. Is it possible with gpiod? How?
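
For reference, the generic gpio-line-names binding mentioned above assigns names by line index within the bank; unused entries can simply be left as empty strings rather than being named "NC" (a sketch naming only line 15):

    &gpio2 {
        gpio-line-names =
            "", "", "", "", "", "", "", "",
            "", "", "", "", "", "", "", "COBO0_ResetL";
    };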

How are percpu pointers implemented in the Linux kernel?

On a multiprocessor system, each core can have its own variables. I thought these were different variables at different addresses, although they belong to the same code and have the same name.

But I am wondering: how does the kernel implement this? Does it set aside a piece of memory to hold all the percpu variables, and each time redirect the pointer to the right address with an offset or something?
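
That hypothesis is roughly the documented mechanism: the kernel keeps one copy of the .data..percpu section per CPU and turns every access into base-plus-offset arithmetic. A sketch of the accessors (the register detail in the comment applies to x86-64):

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, my_counter);   /* one instance per CPU */

    void bump(void)
    {
        int cpu0_copy;

        /* this_cpu_* reaches the calling CPU's copy by adding a per-CPU
         * offset (held in %gs on x86-64) to the variable's link-time
         * address in the .data..percpu section */
        this_cpu_inc(my_counter);

        /* another CPU's copy can be addressed explicitly */
        cpu0_copy = per_cpu(my_counter, 0);
        (void)cpu0_copy;
    }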

Clone a file in a Linux kernel module

I'm having a problem with a Linux module: I want to clone a file given a file descriptor. I tried using

vfs_clone_file_range

but I receive an EOPNOTSUPP error. So I tried vfs_copy_file_range, and the copy works correctly, but I also need the copy to carry the same flags as the original; as it is, even when the original is opened with O_APPEND, the file position of the copy is always at the beginning of the file.

This is my code:

/* the file descriptor is obtained correctly and this part works */
original_filp = fcheck(o_fd);
copy_filp = filp_open(addr, O_CREAT | O_RDWR, 0644);
vfs_copy_file_range(original_filp, 0, copy_filp, 0,
                    i_size_read(original_filp->f_inode), 0);

The content is right, but the position, as I said, is at the beginning even with the O_APPEND flag, so I have to move it explicitly. I also tried adding this line, but without results:

copy_filp->f_pos = original_filp->f_pos;

I really have no idea what to change to make it work. Thank you in advance for your help.
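
One thing worth trying, as a sketch (not verified for this case): position the copy with the VFS seek helper instead of assigning f_pos directly, so the filesystem's own llseek method runs:

    /* move the copy's position to EOF, as O_APPEND would before a write */
    loff_t off = vfs_llseek(copy_filp, 0, SEEK_END);
    if (off < 0)
        pr_err("llseek on the copy failed: %lld\n", off);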

Changing thread real time scheduling policy fails: CONFIG_RT_GROUP_SCHED=y

My apologies if I should be posting this on Super User instead.

I was trying to run Docker inside a real-time group, and I came across enabling cgroups - CONFIG_RT_GROUP_SCHED - in the kernel to run real-time Docker applications (here: https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler)

I configured my kernel to enable the FIFO/RR flags and verified it (available here: How to enable CONFIG_RT_GROUP_SCHED in Ubuntu to make it RT).

I believe my system is properly configured now, because I'm able to run a resource-limited Docker container, which accesses the cgroups, with the following command:

$ docker run -it --cpu-rt-runtime=950000 \
                      --ulimit rtprio=99 \
                      --cap-add=sys_nice \
                      debian:jessie

I went ahead and tried to explore more features of an RT system. I have this C++ code that assigns an RT scheduling policy to a thread. It basically tries to set the SCHED_FIFO policy on a thread and prints whether the kernel allowed the priority to be set.

#include <iostream>
#include <pthread.h>
#include <sched.h>

using namespace std;

void set_realtime_priority() {
     int ret;
     // We'll operate on the currently running thread.
     pthread_t this_thread = pthread_self();
     // struct sched_param is used to store the scheduling priority
     struct sched_param params;

     // We'll set the priority to the maximum.
     params.sched_priority = sched_get_priority_max(SCHED_FIFO);
     std::cout << "Trying to set thread realtime prio = "<< params.sched_priority << std::endl;

     // Attempt to set thread real-time priority to the SCHED_FIFO policy
     ret = pthread_setschedparam(this_thread, SCHED_FIFO, &params);
     if (ret != 0) {
         // Print the error
         std::cout << "Unsuccessful in setting thread realtime prio"<< std::endl;
         return;     
     }
     // Now verify the change in thread priority
     int policy = 0;
     ret = pthread_getschedparam(this_thread, &policy, &params);
     if (ret != 0) {
         std::cout << "Couldn't retrieve real-time scheduling paramers"<< std::endl;
         return;
     }

     // Check the correct policy was applied
     if(policy != SCHED_FIFO) {
         std::cout << "Scheduling is NOT SCHED_FIFO!"<< std::endl;
     } else {
         std::cout << "SCHED_FIFO OK"<< std::endl;
     }

     // Print thread scheduling priority
     std::cout << "Thread priority is "<< params.sched_priority << std::endl; 
}

int main() {
    set_realtime_priority();
    return 0;
}

I've verified this code on generic Ubuntu/Fedora systems and on an RT-patched CentOS system. All of these systems allow the code to set the priority. Surprisingly, it's the kernel configured with CONFIG_RT_GROUP_SCHED=y that doesn't allow me to set the priority policy. Similarly, it also doesn't allow cyclictest to run:

# install cyclictest as follows:

$ sudo apt-get install rt-tests
$ sudo cyclictest

I don't understand this anomalous behavior. Does enabling CONFIG_RT_GROUP_SCHED somehow block me from changing scheduling policies?
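
My understanding (an assumption worth testing) is that with CONFIG_RT_GROUP_SCHED=y a task can only take an RT policy if its cgroup has a nonzero RT budget, and only the root group gets one by default. Something like the following, with <your-cgroup> replaced by the slice your shell runs in, would grant that budget:

    # give the task's cgroup a share of the RT budget before running
    # cyclictest or the pthread_setschedparam() test
    echo 950000 > /sys/fs/cgroup/cpu/<your-cgroup>/cpu.rt_runtime_us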

How to change the kernel optimization level?

I use kgdb for kernel debugging, and I have a problem with printing values.

I wanted to get information about the parameters of a function (e.g. (gdb) p *page), but all I got was <optimized out>.

I found that this happens due to compiler optimization, and it is said that changing the kernel optimization level to -Og would help.

But I don't know how to change it.

I changed the top-level Makefile as below.

ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
KBUILD_CFLAGS   += -Os
else
KBUILD_CFLAGS   += -Og
endif

But I got an error during make:

scripts/Makefile.build:497: recipe for target 'fs/cifs' failed
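
If the whole-kernel switch keeps breaking the build, a narrower kbuild mechanism may be enough for kgdb: lowering optimization only for the files being debugged, e.g. in mm/Makefile (a sketch; whether -Og compiles cleanly still depends on the kernel version):

    # kbuild per-file flags: compile just this object with -Og
    CFLAGS_page_alloc.o += -Og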

Error: conflicting types when trying to make a simple syscall

I'm brand new to Linux programming and I'm trying to implement a simple system call, loosely following this guide: https://medium.com/anubhav-shrimal/adding-a-hello-world-system-call-to-linux-kernel-dad32875872. In my Linux kernel directory, I created a new directory called my_syscall. Within that directory, I created my_syscall.c. Here is my_syscall.c:

#include <linux/syscalls.h>
#include <linux/kernel.h>

asmlinkage long sys_my_syscall(int i)
{
    printk(KERN_INFO "This is the system call.");
    return 0;
}

I then created a Makefile in the my_syscall directory with a single line:

obj-y := my_syscall.o

I then edited this line in the Makefile in the kernel directory to be:

core-y         += kernel/ certs/ mm/ fs/ ipc/ security/ crypto/ block/ my_syscall/

Then, in the directory linux-5.4.15/arch/x86/entry/syscalls, I edited the syscall_64.tbl to include the following line at the very end:

548     64         my_syscall          sys_my_syscall

Finally, in the directory linux-5.4.15/include/linux, I edited the syscalls.h file to include this line before the #endif:

asmlinkage long sys_my_syscall(int i);

Now, when I run sudo make, I soon run into the following error:

./arch/x86/include/generated/asm/syscalls_64.h:2664:19: error: conflicting types for 'sys_my_syscall'
__SYSCALL_64(548, sys_my_syscall, )

arch/x86/entry/syscall_64.c:18:60: note: in definition of macro '__SYSCALL_64'
  #define __SYSCALL_64(nr, sym, qual) extern asmlinkage long sym(const struct pt_regs *);

In file included from arch/x86/entry/syscall_64.c:7:0:
./include/linux/syscalls.h:1423:17: note: previous declaration of 'sys_my_syscall' was here
 asmlinkage long sys_my_syscall(int i);
                 ^
make[3]: *** [arch/x86/entry/syscall_64.o] Error 1
make[2]: *** [arch/x86/entry] Error 2
make[1]: *** [arch/x86] Error 2
make: *** [sub-make] Error 2

I have no idea how to approach this error. With a conflicting-types error, I would think I had declared the syscall differently somewhere, but in both my_syscall.c and syscalls.h the declaration is the same. These are the only two files where the syscall is declared, but it is also named in syscall_64.tbl, and that seems to be where Linux is pointing me. However, I don't see what's wrong with how I declared it in the table, as I followed the guide directly. Any help would be greatly appreciated!

Info:

Kernel version: 5.4.15

Linux Distribution: Ubuntu 14
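
For context, since 4.17 the generated syscalls_64.h declares every x86-64 entry as taking const struct pt_regs *, so a plain asmlinkage long sys_my_syscall(int) prototype will always conflict with it. A sketch of the SYSCALL_DEFINE1 form that generates the expected register-unpacking wrapper (an illustration, not a verified drop-in for the guide):

    #include <linux/syscalls.h>
    #include <linux/kernel.h>

    /* expands to __x64_sys_my_syscall(const struct pt_regs *) plus a
     * helper that unpacks the int argument from the register frame */
    SYSCALL_DEFINE1(my_syscall, int, i)
    {
        printk(KERN_INFO "This is the system call: %d\n", i);
        return 0;
    }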


The postinstall intercept hook 'update_gio_module_cache' failed

I am building core-image-minimal for the warrior branch. My device has an Atom processor, so I changed nehalem to atom in the tune-corei7.inc file. My machine is set to intel-corei7-64. While generating core-image-minimal, I am facing the following error:

NOTE: Installing complementary packages ... 
NOTE: Running ['oe-pkgdata-util', '-p', '/home/panther2/warrior/build_panther1/tmp/pkgdata/panther1', 'glob', '/tmp/installed-pkgs03hhi936', ''] 
NOTE: Running intercept scripts:
NOTE: > Executing update_gio_module_cache intercept ... 
NOTE: Exit code 1. Output:
+ [ True = False -a qemuwrapper-cross != nativesdk-qemuwrapper-cross ]
+ qemu-x86_64 -r 3.2.0 -cpu atom,check=false -E LD_LIBRARY_PATH=/home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/usr/lib:/home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/lib -L /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/usr/libexec/gio-querymodules /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/rootfs/usr/lib/gio/modules/
unable to find CPU model 'atom'

ERROR: The postinstall intercept hook 'update_gio_module_cache' failed, details in /home/panther2/warrior/build_panther1/tmp/work/panther1-poky-linux/core-image-minimal/1.0-r0/temp/log.do_rootfs
ERROR: 
DEBUG: Python function do_rootfs finished
ERROR: Function failed: do_rootfs 

Any help here?

Thanks in advance!

Edit: attaching the "tune-corei7.inc" file:

# Settings for the GCC(1) cpu-type "atom":
#
#     Intel atom CPU with 64-bit extensions, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1,
#     SSE4.2 and POPCNT instruction set support.
#
# This tune is recommended for Intel atom and Silvermont (e.g. Bay Trail) CPUs
# (and beyond).
#
DEFAULTTUNE ?= "corei7-64"

# Include the previous tune to pull in PACKAGE_EXTRA_ARCHS
require conf/machine/include/tune-atom.inc

# Extra tune features
TUNEVALID[corei7] = "Enable corei7 specific processor optimizations"
TUNE_CCARGS .= "${@bb.utils.contains('TUNE_FEATURES', 'corei7', ' -march=atom -mtune=generic -mfpmath=sse -msse4.2', '', d)}"

# Extra tune selections
AVAILTUNES += "corei7-32"
TUNE_FEATURES_tune-corei7-32 = "${TUNE_FEATURES_tune-x86} corei7"
BASE_LIB_tune-corei7-32 = "lib"
TUNE_PKGARCH_tune-corei7-32 = "corei7-32"
PACKAGE_EXTRA_ARCHS_tune-corei7-32 = "${PACKAGE_EXTRA_ARCHS_tune-atom-32} corei7-32"
QEMU_EXTRAOPTIONS_corei7-32 = " -cpu nehalem,check=false"

AVAILTUNES += "corei7-64"
TUNE_FEATURES_tune-corei7-64 = "${TUNE_FEATURES_tune-x86-64} corei7"
BASE_LIB_tune-corei7-64 = "lib64"
TUNE_PKGARCH_tune-corei7-64 = "corei7-64"
PACKAGE_EXTRA_ARCHS_tune-corei7-64 = "${PACKAGE_EXTRA_ARCHS_tune-atom-64} corei7-64"
QEMU_EXTRAOPTIONS_corei7-64 = " -cpu nehalem,check=false"

AVAILTUNES += "corei7-64-x32"
TUNE_FEATURES_tune-corei7-64-x32 = "${TUNE_FEATURES_tune-x86-64-x32} corei7"
BASE_LIB_tune-corei7-64-x32 = "libx32"
TUNE_PKGARCH_tune-corei7-64-x32 = "corei7-64-x32"
PACKAGE_EXTRA_ARCHS_tune-corei7-64-x32 = "${PACKAGE_EXTRA_ARCHS_tune-atom-64-x32} corei7-64-x32"
QEMU_EXTRAOPTIONS_corei7-64-x32 = " -cpu nehalem,check=false"

Register multiple SPI ports (devices) to single SPI Platform Driver?

I'm developing a Linux SPI driver to handle communication via the SPI ports. My SoC offers three SPI modules (which I understand to be ports) called ecspi1/ecspi2/ecspi3. I need to send two kinds of data, using ecspi1 and ecspi2.

I've implemented a driver registered with the SPI core, and I already handle ecspi1 successfully with the following additions to the dts and the driver:

[ dts ]

&ecspi1 {
    status = "okay";

    fpga1: lfe5u12f6bg256i@0 {
        reg = <0>;
        compatible = "lattice,lfe5u12f6bg256i";
        spi-max-frequency = <10000000>;
    };
};

[ driver ]

static const struct of_device_id fpga_spi_of_match[] = {
    { .compatible = "lattice,lfe5u12f6bg256i", },
    {},
};

I've tried to add ecspi2 to the driver with the modifications below. However, the driver is probed twice on boot and fails during the second probe.

[ dts ]

&ecspi1 {
    status = "okay";

    fpga1: lfe5u12f6bg256i@0 {
        reg = <0>;
        compatible = "lattice,lfe5u12f6bg256i";
        spi-max-frequency = <10000000>;
    };
 };

+&ecspi2 {
+   status = "okay";
+
+   fpga0: fpga_fw@0 {
+       reg = <0>;
+       compatible = "fpga_fw,lfe5u12f6bg256i";
+       spi-max-frequency = <10000000>;
+   };
+};

[ driver ]

 static const struct of_device_id fpga_spi_of_match[] = {
    { .compatible = "fpga_fw,lfe5u12f6bg256i", },
+   { .compatible = "lattice,lfe5u12f6bg256i", },
    {},
 };

Does anyone know how to handle multiple SPI ports (devices) in a single driver?
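
Being probed once per matching spi_device is expected behavior, one probe for the ecspi1 child and one for the ecspi2 child, so the usual fix is to keep all state per device instead of in file-scope globals. A sketch with hypothetical names:

    struct fpga_priv {
        struct spi_device *spi;
        /* per-port state lives here, never in globals shared by probes */
    };

    static int fpga_spi_probe(struct spi_device *spi)
    {
        struct fpga_priv *priv;

        priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
            return -ENOMEM;

        priv->spi = spi;
        spi_set_drvdata(spi, priv);   /* fetched later via spi_get_drvdata() */
        return 0;
    }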

What is the difference between kmemdup_nul() and kstrndup() in Linux?

They are similar functions, but what is the exact difference between them? The Linux documentation says:

Note: Use kmemdup_nul() instead if the size is known exactly.
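
My reading of the semantics (worth verifying against mm/util.c): kstrndup() first scans for a NUL within the limit and duplicates only up to it, while kmemdup_nul() copies exactly n bytes and appends a terminator, skipping the scan. For example:

    const char buf[] = { 'a', 'b', '\0', 'c' };

    /* scans with strnlen() first: duplicates "ab" (3 bytes allocated) */
    char *s1 = kstrndup(buf, 4, GFP_KERNEL);

    /* copies all 4 bytes and appends '\0' (5 bytes allocated); cheaper
     * when the length is already known exactly */
    char *s2 = kmemdup_nul(buf, 4, GFP_KERNEL);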

Why can SCHED_NORMAL processes affect the latency of an RT process?

I have tested an RT process's latency with cyclictest and found that when I launch dozens of disturbing SCHED_NORMAL processes, the RT process's latency gets bigger. I can't explain this.

The test command is: ./cyclictest -p80 -n

The priority of the rt process is 80.

The disturbing process is simple:

int j = 0;
while (1) {
    srand(time(0));
    while (j++ < rand() % 1000000)
        ;       /* busy-spin for a pseudo-random number of iterations */
    usleep(10);
}

In the Linux kernel, SCHED_FIFO processes have high scheduling priority: no matter how many SCHED_NORMAL processes there are, as long as the RT process is runnable, it should be executed immediately.

Can anyone explain why launching dozens of normal processes can affect the RT process's latency?

Correct way to join two doubly linked lists

In the Linux kernel source, list_splice is implemented via __list_splice:

static inline void __list_splice(const struct list_head *list,
                                 struct list_head *prev,
                                 struct list_head *next)
{
        struct list_head *first = list->next; // Why?
        struct list_head *last = list->prev;

        first->prev = prev;
        prev->next = first;

        last->next = next;
        next->prev = last;
}

Isn't list already pointing to the head of the linked list? Why do we need to fetch list->next instead?
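
A short illustration of why the head itself is skipped: LIST_HEAD declares a sentinel node that carries no payload, so only the elements between list->next and list->prev belong in the result:

    LIST_HEAD(a);   /* sentinel: a.next == a.prev == &a when empty */
    LIST_HEAD(b);

    /* ... populate a and b with list_add_tail() ... */

    /* links a's elements (a.next .. a.prev) right after b's head; the
     * sentinel a itself must not end up in the joined list, which is
     * why __list_splice starts from list->next */
    list_splice(&a, &b);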


