From 28405d8d9ce05f5bd869ef8b48da5086f9527d73 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Mon, 5 Jan 2009 17:14:31 -0700 Subject: async_tx, dmaengine: document channel allocation and api rework "Wouldn't it be better if the dmaengine layer made sure it didn't pass the same channel several times to a client? I mean, you seem concerned that the memcpy() API should be transparent and easy to use, but the whole registration interface is just ridiculously complicated..." - Haavard The dmaengine and async_tx registration/allocation interface is indeed needlessly complicated. This redesign has the following goals: 1/ Simplify reference counting: dma channels are not something one would expect to be hotplugged, it should be an exceptional event handled by drivers not something clients should be mandated to handle in a callback. The common case channel removal event is 'rmmod ', which for simplicity should be disallowed if the channel is in use. 2/ Add an interface for requesting exclusive access to a channel suitable to device-to-memory users. 3/ Convert all memory-to-memory users over to a common allocator, the goal here is to not have competing channel allocation schemes. The only competition should be between device-to-memory exclusive allocations and the memory-to-memory usage case where channels are shared between multiple "clients". Cc: Haavard Skinnemoen Cc: Neil Brown Cc: Jeff Garzik Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/Documentation/crypto/async-tx-api.txt b/Documentation/crypto/async-tx-api.txt index c1e9545..9f59fcb 100644 --- a/Documentation/crypto/async-tx-api.txt +++ b/Documentation/crypto/async-tx-api.txt @@ -13,9 +13,9 @@ 3.6 Constraints 3.7 Example -4 DRIVER DEVELOPER NOTES +4 DMAENGINE DRIVER DEVELOPER NOTES 4.1 Conformance points -4.2 "My application needs finer control of hardware channels" +4.2 "My application needs exclusive control of hardware channels" 5 SOURCE @@ -150,6 +150,7 @@ ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more implementation examples. 4 DRIVER DEVELOPMENT NOTES + 4.1 Conformance points: There are a few conformance points required in dmaengine drivers to accommodate assumptions made by applications using the async_tx API: @@ -158,58 +159,49 @@ accommodate assumptions made by applications using the async_tx API: 3/ Use async_tx_run_dependencies() in the descriptor clean up path to handle submission of dependent operations -4.2 "My application needs finer control of hardware channels" -This requirement seems to arise from cases where a DMA engine driver is -trying to support device-to-memory DMA. The dmaengine and async_tx -implementations were designed for offloading memory-to-memory -operations; however, there are some capabilities of the dmaengine layer -that can be used for platform-specific channel management. -Platform-specific constraints can be handled by registering the -application as a 'dma_client' and implementing a 'dma_event_callback' to -apply a filter to the available channels in the system. Before showing -how to implement a custom dma_event callback some background of -dmaengine's client support is required. - -The following routines in dmaengine support multiple clients requesting -use of a channel: -- dma_async_client_register(struct dma_client *client) -- dma_async_client_chan_request(struct dma_client *client) - -dma_async_client_register takes a pointer to an initialized dma_client -structure. It expects that the 'event_callback' and 'cap_mask' fields -are already initialized. 
- -dma_async_client_chan_request triggers dmaengine to notify the client of -all channels that satisfy the capability mask. It is up to the client's -event_callback routine to track how many channels the client needs and -how many it is currently using. The dma_event_callback routine returns a -dma_state_client code to let dmaengine know the status of the -allocation. - -Below is the example of how to extend this functionality for -platform-specific filtering of the available channels beyond the -standard capability mask: - -static enum dma_state_client -my_dma_client_callback(struct dma_client *client, - struct dma_chan *chan, enum dma_state state) -{ - struct dma_device *dma_dev; - struct my_platform_specific_dma *plat_dma_dev; - - dma_dev = chan->device; - plat_dma_dev = container_of(dma_dev, - struct my_platform_specific_dma, - dma_dev); - - if (!plat_dma_dev->platform_specific_capability) - return DMA_DUP; - - . . . -} +4.2 "My application needs exclusive control of hardware channels" +Primarily this requirement arises from cases where a DMA engine driver +is being used to support device-to-memory operations. A channel that is +performing these operations cannot, for many platform specific reasons, +be shared. For these cases the dma_request_channel() interface is +provided. + +The interface is: +struct dma_chan *dma_request_channel(dma_cap_mask_t mask, + dma_filter_fn filter_fn, + void *filter_param); + +Where dma_filter_fn is defined as: +typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param); + +When the optional 'filter_fn' parameter is set to NULL +dma_request_channel simply returns the first channel that satisfies the +capability mask. Otherwise, when the mask parameter is insufficient for +specifying the necessary channel, the filter_fn routine can be used to +disposition the available channels in the system. The filter_fn routine +is called once for each free channel in the system. Upon seeing a +suitable channel filter_fn returns DMA_ACK which flags that channel to +be the return value from dma_request_channel. A channel allocated via +this interface is exclusive to the caller, until dma_release_channel() +is called. + +The DMA_PRIVATE capability flag is used to tag dma devices that should +not be used by the general-purpose allocator. It can be set at +initialization time if it is known that a channel will always be +private. Alternatively, it is set when dma_request_channel() finds an +unused "public" channel. + +A couple caveats to note when implementing a driver and consumer: +1/ Once a channel has been privately allocated it will no longer be + considered by the general-purpose allocator even after a call to + dma_release_channel(). +2/ Since capabilities are specified at the device level a dma_device + with multiple channels will either have all channels public, or all + channels private. 
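A minimal consumer-side sketch of the interface described above. The names
my_filter, my_probe, my_platform_data and its dma_dev field are illustrative
only, and the filter follows the bool-returning typedef shown in this section
(under the enum dma_state_client convention used elsewhere in this series the
equivalent success value is DMA_ACK):

static bool my_filter(struct dma_chan *chan, void *param)
{
	struct my_platform_data *pdata = param;	/* hypothetical */

	/* only accept a channel on the DMA controller we care about */
	return chan->device->dev == pdata->dma_dev;
}

static int my_probe(struct my_platform_data *pdata)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);

	/* exclusive allocation; NULL if no suitable channel is free */
	chan = dma_request_channel(mask, my_filter, pdata);
	if (!chan)
		return -ENODEV;

	/* ... program device-to-memory transfers on 'chan' ... */

	dma_release_channel(chan);
	return 0;
}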
5 SOURCE -include/linux/dmaengine.h: core header file for DMA drivers and clients + +include/linux/dmaengine.h: core header file for DMA drivers and api users drivers/dma/dmaengine.c: offload engine channel management routines drivers/dma/: location for offload engine drivers include/linux/async_tx.h: core header file for the async_tx api diff --git a/Documentation/dmaengine.txt b/Documentation/dmaengine.txt new file mode 100644 index 0000000..0c1c2f6 --- /dev/null +++ b/Documentation/dmaengine.txt @@ -0,0 +1 @@ +See Documentation/crypto/async-tx-api.txt -- cgit v0.10.2 From 07f2211e4fbce6990722d78c4f04225da9c0e9cf Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Mon, 5 Jan 2009 17:14:31 -0700 Subject: dmaengine: remove dependency on async_tx async_tx.ko is a consumer of dma channels. A circular dependency arises if modules in drivers/dma rely on common code in async_tx.ko. It prevents either module from being unloaded. Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaeninge.o where they should have been from the beginning. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c index dcbf1be..8cfac18 100644 --- a/crypto/async_tx/async_tx.c +++ b/crypto/async_tx/async_tx.c @@ -72,81 +72,6 @@ void async_tx_issue_pending_all(void) } EXPORT_SYMBOL_GPL(async_tx_issue_pending_all); -/* dma_wait_for_async_tx - spin wait for a transcation to complete - * @tx: transaction to wait on - */ -enum dma_status -dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) -{ - enum dma_status status; - struct dma_async_tx_descriptor *iter; - struct dma_async_tx_descriptor *parent; - - if (!tx) - return DMA_SUCCESS; - - /* poll through the dependency chain, return when tx is complete */ - do { - iter = tx; - - /* find the root of the unsubmitted dependency chain */ - do { - parent = iter->parent; - if (!parent) - break; - else - iter = parent; - } while (parent); - - /* there is a small window for ->parent == NULL and - * ->cookie == -EBUSY - */ - while (iter->cookie == -EBUSY) - cpu_relax(); - - status = dma_sync_wait(iter->chan, iter->cookie); - } while (status == DMA_IN_PROGRESS || (iter != tx)); - - return status; -} -EXPORT_SYMBOL_GPL(dma_wait_for_async_tx); - -/* async_tx_run_dependencies - helper routine for dma drivers to process - * (start) dependent operations on their target channel - * @tx: transaction with dependencies - */ -void async_tx_run_dependencies(struct dma_async_tx_descriptor *tx) -{ - struct dma_async_tx_descriptor *dep = tx->next; - struct dma_async_tx_descriptor *dep_next; - struct dma_chan *chan; - - if (!dep) - return; - - chan = dep->chan; - - /* keep submitting up until a channel switch is detected - * in that case we will be called again as a result of - * processing the interrupt from async_tx_channel_switch - */ - for (; dep; dep = dep_next) { - spin_lock_bh(&dep->lock); - dep->parent = NULL; - dep_next = dep->next; - if (dep_next && dep_next->chan == chan) - dep->next = NULL; /* ->next will be submitted */ - else - dep_next = NULL; /* submit current dep and terminate */ - spin_unlock_bh(&dep->lock); - - dep->tx_submit(dep); - } - - chan->device->device_issue_pending(chan); -} -EXPORT_SYMBOL_GPL(async_tx_run_dependencies); - static void free_dma_chan_ref(struct rcu_head *rcu) { diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index 904e575..e34b064 100644 --- a/drivers/dma/Kconfig +++ b/drivers/dma/Kconfig @@ -33,7 +33,6 @@ config INTEL_IOATDMA config INTEL_IOP_ADMA tristate "Intel IOP 
ADMA support" depends on ARCH_IOP32X || ARCH_IOP33X || ARCH_IOP13XX - select ASYNC_CORE select DMA_ENGINE help Enable support for the Intel(R) IOP Series RAID engines. @@ -59,7 +58,6 @@ config FSL_DMA config MV_XOR bool "Marvell XOR engine support" depends on PLAT_ORION - select ASYNC_CORE select DMA_ENGINE ---help--- Enable support for the Marvell XOR engine. diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 6579965..b900893 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -626,6 +626,90 @@ void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx, } EXPORT_SYMBOL(dma_async_tx_descriptor_init); +/* dma_wait_for_async_tx - spin wait for a transaction to complete + * @tx: in-flight transaction to wait on + * + * This routine assumes that tx was obtained from a call to async_memcpy, + * async_xor, async_memset, etc which ensures that tx is "in-flight" (prepped + * and submitted). Walking the parent chain is only meant to cover for DMA + * drivers that do not implement the DMA_INTERRUPT capability and may race with + * the driver's descriptor cleanup routine. + */ +enum dma_status +dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) +{ + enum dma_status status; + struct dma_async_tx_descriptor *iter; + struct dma_async_tx_descriptor *parent; + + if (!tx) + return DMA_SUCCESS; + + WARN_ONCE(tx->parent, "%s: speculatively walking dependency chain for" + " %s\n", __func__, dev_name(&tx->chan->dev)); + + /* poll through the dependency chain, return when tx is complete */ + do { + iter = tx; + + /* find the root of the unsubmitted dependency chain */ + do { + parent = iter->parent; + if (!parent) + break; + else + iter = parent; + } while (parent); + + /* there is a small window for ->parent == NULL and + * ->cookie == -EBUSY + */ + while (iter->cookie == -EBUSY) + cpu_relax(); + + status = dma_sync_wait(iter->chan, iter->cookie); + } while (status == DMA_IN_PROGRESS || (iter != tx)); + + return status; +} +EXPORT_SYMBOL_GPL(dma_wait_for_async_tx); + +/* dma_run_dependencies - helper routine for dma drivers to process + * (start) dependent operations on their target channel + * @tx: transaction with dependencies + */ +void dma_run_dependencies(struct dma_async_tx_descriptor *tx) +{ + struct dma_async_tx_descriptor *dep = tx->next; + struct dma_async_tx_descriptor *dep_next; + struct dma_chan *chan; + + if (!dep) + return; + + chan = dep->chan; + + /* keep submitting up until a channel switch is detected + * in that case we will be called again as a result of + * processing the interrupt from async_tx_channel_switch + */ + for (; dep; dep = dep_next) { + spin_lock_bh(&dep->lock); + dep->parent = NULL; + dep_next = dep->next; + if (dep_next && dep_next->chan == chan) + dep->next = NULL; /* ->next will be submitted */ + else + dep_next = NULL; /* submit current dep and terminate */ + spin_unlock_bh(&dep->lock); + + dep->tx_submit(dep); + } + + chan->device->device_issue_pending(chan); +} +EXPORT_SYMBOL_GPL(dma_run_dependencies); + static int __init dma_bus_init(void) { mutex_init(&dma_list_mutex); diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index 6be3172..be9ea9f 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -24,7 +24,6 @@ #include #include -#include #include #include #include @@ -116,7 +115,7 @@ iop_adma_run_tx_complete_actions(struct iop_adma_desc_slot *desc, } /* run dependent operations */ - async_tx_run_dependencies(&desc->async_tx); + dma_run_dependencies(&desc->async_tx); return cookie; } diff --git 
a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c index bcda174..3f46df3 100644 --- a/drivers/dma/mv_xor.c +++ b/drivers/dma/mv_xor.c @@ -18,7 +18,6 @@ #include #include -#include #include #include #include @@ -340,7 +339,7 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc, } /* run dependent operations */ - async_tx_run_dependencies(&desc->async_tx); + dma_run_dependencies(&desc->async_tx); return cookie; } diff --git a/include/linux/async_tx.h b/include/linux/async_tx.h index 0f50d4c..1c81677 100644 --- a/include/linux/async_tx.h +++ b/include/linux/async_tx.h @@ -60,8 +60,6 @@ enum async_tx_flags { #ifdef CONFIG_DMA_ENGINE void async_tx_issue_pending_all(void); -enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx); -void async_tx_run_dependencies(struct dma_async_tx_descriptor *tx); #ifdef CONFIG_ARCH_HAS_ASYNC_TX_FIND_CHANNEL #include #else @@ -77,19 +75,6 @@ static inline void async_tx_issue_pending_all(void) do { } while (0); } -static inline enum dma_status -dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) -{ - return DMA_SUCCESS; -} - -static inline void -async_tx_run_dependencies(struct dma_async_tx_descriptor *tx, - struct dma_chan *host_chan) -{ - do { } while (0); -} - static inline struct dma_chan * async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx, enum dma_transaction_type tx_type, struct page **dst, int dst_count, diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index adb0b08..e4ec7e7 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -475,11 +475,20 @@ static inline enum dma_status dma_async_is_complete(dma_cookie_t cookie, } enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie); +#ifdef CONFIG_DMA_ENGINE +enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx); +#else +static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) +{ + return DMA_SUCCESS; +} +#endif /* --- DMA device --- */ int dma_async_device_register(struct dma_device *device); void dma_async_device_unregister(struct dma_device *device); +void dma_run_dependencies(struct dma_async_tx_descriptor *tx); /* --- Helper iov-locking functions --- */ -- cgit v0.10.2 From 6f49a57aa5a0c6d4e4e27c85f7af6c83325a12d1 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:14 -0700 Subject: dmaengine: up-level reference counting to the module level Simply, if a client wants any dmaengine channel then prevent all dmaengine modules from being removed. Once the clients are done re-enable module removal. Why?, beyond reducing complication: 1/ Tracking reference counts per-transaction in an efficient manner, as is currently done, requires a complicated scheme to avoid cache-line bouncing effects. 2/ Per-transaction ref-counting gives the false impression that a dma-driver can be gracefully removed ahead of its user (net, md, or dma-slave) 3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but if such an engine were built one day we still would not need to notify clients of remove events. The driver can simply return NULL to a ->prep() request, something that is much easier for a client to handle. 
Reviewed-by: Andrew Morton Acked-by: Maciej Sosnowski Signed-off-by: Dan Williams diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c index 8cfac18..43fe4cb 100644 --- a/crypto/async_tx/async_tx.c +++ b/crypto/async_tx/async_tx.c @@ -198,8 +198,6 @@ dma_channel_add_remove(struct dma_client *client, /* add the channel to the generic management list */ master_ref = kmalloc(sizeof(*master_ref), GFP_KERNEL); if (master_ref) { - /* keep a reference until async_tx is unloaded */ - dma_chan_get(chan); init_dma_chan_ref(master_ref, chan); spin_lock_irqsave(&async_tx_lock, flags); list_add_tail_rcu(&master_ref->node, @@ -221,8 +219,6 @@ dma_channel_add_remove(struct dma_client *client, spin_lock_irqsave(&async_tx_lock, flags); list_for_each_entry(ref, &async_tx_master_list, node) if (ref->chan == chan) { - /* permit backing devices to go away */ - dma_chan_put(ref->chan); list_del_rcu(&ref->node); call_rcu(&ref->rcu, free_dma_chan_ref); found = 1; diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index b900893..d4d9259 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -74,6 +74,7 @@ static DEFINE_MUTEX(dma_list_mutex); static LIST_HEAD(dma_device_list); static LIST_HEAD(dma_client_list); +static long dmaengine_ref_count; /* --- sysfs implementation --- */ @@ -105,19 +106,8 @@ static ssize_t show_bytes_transferred(struct device *dev, struct device_attribut static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf) { struct dma_chan *chan = to_dma_chan(dev); - int in_use = 0; - - if (unlikely(chan->slow_ref) && - atomic_read(&chan->refcount.refcount) > 1) - in_use = 1; - else { - if (local_read(&(per_cpu_ptr(chan->local, - get_cpu())->refcount)) > 0) - in_use = 1; - put_cpu(); - } - return sprintf(buf, "%d\n", in_use); + return sprintf(buf, "%d\n", chan->client_count); } static struct device_attribute dma_attrs[] = { @@ -155,6 +145,78 @@ __dma_chan_satisfies_mask(struct dma_chan *chan, dma_cap_mask_t *want) return bitmap_equal(want->bits, has.bits, DMA_TX_TYPE_END); } +static struct module *dma_chan_to_owner(struct dma_chan *chan) +{ + return chan->device->dev->driver->owner; +} + +/** + * balance_ref_count - catch up the channel reference count + * @chan - channel to balance ->client_count versus dmaengine_ref_count + * + * balance_ref_count must be called under dma_list_mutex + */ +static void balance_ref_count(struct dma_chan *chan) +{ + struct module *owner = dma_chan_to_owner(chan); + + while (chan->client_count < dmaengine_ref_count) { + __module_get(owner); + chan->client_count++; + } +} + +/** + * dma_chan_get - try to grab a dma channel's parent driver module + * @chan - channel to grab + * + * Must be called under dma_list_mutex + */ +static int dma_chan_get(struct dma_chan *chan) +{ + int err = -ENODEV; + struct module *owner = dma_chan_to_owner(chan); + + if (chan->client_count) { + __module_get(owner); + err = 0; + } else if (try_module_get(owner)) + err = 0; + + if (err == 0) + chan->client_count++; + + /* allocate upon first client reference */ + if (chan->client_count == 1 && err == 0) { + int desc_cnt = chan->device->device_alloc_chan_resources(chan, NULL); + + if (desc_cnt < 0) { + err = desc_cnt; + chan->client_count = 0; + module_put(owner); + } else + balance_ref_count(chan); + } + + return err; +} + +/** + * dma_chan_put - drop a reference to a dma channel's parent driver module + * @chan - channel to release + * + * Must be called under dma_list_mutex + */ +static void dma_chan_put(struct dma_chan 
*chan) +{ + if (!chan->client_count) + return; /* this channel failed alloc_chan_resources */ + chan->client_count--; + module_put(dma_chan_to_owner(chan)); + if (chan->client_count == 0) + chan->device->device_free_chan_resources(chan); +} + /** * dma_client_chan_alloc - try to allocate channels to a client * @client: &dma_client @@ -165,7 +227,6 @@ static void dma_client_chan_alloc(struct dma_client *client) { struct dma_device *device; struct dma_chan *chan; - int desc; /* allocated descriptor count */ enum dma_state_client ack; /* Find a channel */ @@ -178,23 +239,16 @@ static void dma_client_chan_alloc(struct dma_client *client) list_for_each_entry(chan, &device->channels, device_node) { if (!dma_chan_satisfies_mask(chan, client->cap_mask)) continue; + if (!chan->client_count) + continue; + ack = client->event_callback(client, chan, + DMA_RESOURCE_AVAILABLE); - desc = chan->device->device_alloc_chan_resources( - chan, client); - if (desc >= 0) { - ack = client->event_callback(client, - chan, - DMA_RESOURCE_AVAILABLE); - - /* we are done once this client rejects - * an available resource - */ - if (ack == DMA_ACK) { - dma_chan_get(chan); - chan->client_count++; - } else if (ack == DMA_NAK) - return; - } + /* we are done once this client rejects + * an available resource + */ + if (ack == DMA_NAK) + return; } } } @@ -224,7 +278,6 @@ EXPORT_SYMBOL(dma_sync_wait); void dma_chan_cleanup(struct kref *kref) { struct dma_chan *chan = container_of(kref, struct dma_chan, refcount); - chan->device->device_free_chan_resources(chan); kref_put(&chan->device->refcount, dma_async_device_cleanup); } EXPORT_SYMBOL(dma_chan_cleanup); @@ -232,18 +285,12 @@ EXPORT_SYMBOL(dma_chan_cleanup); static void dma_chan_free_rcu(struct rcu_head *rcu) { struct dma_chan *chan = container_of(rcu, struct dma_chan, rcu); - int bias = 0x7FFFFFFF; - int i; - for_each_possible_cpu(i) - bias -= local_read(&per_cpu_ptr(chan->local, i)->refcount); - atomic_sub(bias, &chan->refcount.refcount); + kref_put(&chan->refcount, dma_chan_cleanup); } static void dma_chan_release(struct dma_chan *chan) { - atomic_add(0x7FFFFFFF, &chan->refcount.refcount); - chan->slow_ref = 1; call_rcu(&chan->rcu, dma_chan_free_rcu); } @@ -263,43 +310,36 @@ static void dma_clients_notify_available(void) } /** - * dma_chans_notify_available - tell the clients that a channel is going away - * @chan: channel on its way out - */ -static void dma_clients_notify_removed(struct dma_chan *chan) -{ - struct dma_client *client; - enum dma_state_client ack; - - mutex_lock(&dma_list_mutex); - - list_for_each_entry(client, &dma_client_list, global_node) { - ack = client->event_callback(client, chan, - DMA_RESOURCE_REMOVED); - - /* client was holding resources for this channel so - * free it - */ - if (ack == DMA_ACK) { - dma_chan_put(chan); - chan->client_count--; - } - } - - mutex_unlock(&dma_list_mutex); -} - -/** * dma_async_client_register - register a &dma_client * @client: ptr to a client structure with valid 'event_callback' and 'cap_mask' */ void dma_async_client_register(struct dma_client *client) { + struct dma_device *device, *_d; + struct dma_chan *chan; + int err; + /* validate client data */ BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) && !client->slave); mutex_lock(&dma_list_mutex); + dmaengine_ref_count++; + + /* try to grab channels */ + list_for_each_entry_safe(device, _d, &dma_device_list, global_node) + list_for_each_entry(chan, &device->channels, device_node) { + err = dma_chan_get(chan); + if (err == -ENODEV) { + /* module removed before we 
could use it */ + list_del_init(&device->global_node); + break; + } else if (err) + pr_err("dmaengine: failed to get %s: (%d)\n", + dev_name(&chan->dev), err); + } + + list_add_tail(&client->global_node, &dma_client_list); mutex_unlock(&dma_list_mutex); } @@ -315,23 +355,17 @@ void dma_async_client_unregister(struct dma_client *client) { struct dma_device *device; struct dma_chan *chan; - enum dma_state_client ack; if (!client) return; mutex_lock(&dma_list_mutex); - /* free all channels the client is holding */ + dmaengine_ref_count--; + BUG_ON(dmaengine_ref_count < 0); + /* drop channel references */ list_for_each_entry(device, &dma_device_list, global_node) - list_for_each_entry(chan, &device->channels, device_node) { - ack = client->event_callback(client, chan, - DMA_RESOURCE_REMOVED); - - if (ack == DMA_ACK) { - dma_chan_put(chan); - chan->client_count--; - } - } + list_for_each_entry(chan, &device->channels, device_node) + dma_chan_put(chan); list_del(&client->global_node); mutex_unlock(&dma_list_mutex); @@ -423,6 +457,21 @@ int dma_async_device_register(struct dma_device *device) } mutex_lock(&dma_list_mutex); + if (dmaengine_ref_count) + list_for_each_entry(chan, &device->channels, device_node) { + /* if clients are already waiting for channels we need + * to take references on their behalf + */ + if (dma_chan_get(chan) == -ENODEV) { + /* note we can only get here for the first + * channel as the remaining channels are + * guaranteed to get a reference + */ + rc = -ENODEV; + mutex_unlock(&dma_list_mutex); + goto err_out; + } + } list_add_tail(&device->global_node, &dma_device_list); mutex_unlock(&dma_list_mutex); @@ -456,7 +505,7 @@ static void dma_async_device_cleanup(struct kref *kref) } /** - * dma_async_device_unregister - unregisters DMA devices + * dma_async_device_unregister - unregister a DMA device * @device: &dma_device */ void dma_async_device_unregister(struct dma_device *device) @@ -468,7 +517,9 @@ void dma_async_device_unregister(struct dma_device *device) mutex_unlock(&dma_list_mutex); list_for_each_entry(chan, &device->channels, device_node) { - dma_clients_notify_removed(chan); + WARN_ONCE(chan->client_count, + "%s called while %d clients hold a reference\n", + __func__, chan->client_count); device_unregister(&chan->dev); dma_chan_release(chan); } diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c index ed9636b..db40508 100644 --- a/drivers/dma/dmatest.c +++ b/drivers/dma/dmatest.c @@ -215,7 +215,6 @@ static int dmatest_func(void *data) smp_rmb(); chan = thread->chan; - dma_chan_get(chan); while (!kthread_should_stop()) { total_tests++; @@ -293,7 +292,6 @@ static int dmatest_func(void *data) } ret = 0; - dma_chan_put(chan); kfree(thread->dstbuf); err_dstbuf: kfree(thread->srcbuf); diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c index 0778d99..377dafa 100644 --- a/drivers/dma/dw_dmac.c +++ b/drivers/dma/dw_dmac.c @@ -773,7 +773,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan, dev_vdbg(&chan->dev, "alloc_chan_resources\n"); /* Channels doing slave DMA can only handle one client. 
*/ - if (dwc->dws || client->slave) { + if (dwc->dws || (client && client->slave)) { if (chan->client_count) return -EBUSY; } diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c index 7a3f243..6c11f4d 100644 --- a/drivers/mmc/host/atmel-mci.c +++ b/drivers/mmc/host/atmel-mci.c @@ -593,10 +593,8 @@ atmci_submit_data_dma(struct atmel_mci *host, struct mmc_data *data) /* If we don't have a channel, we can't do DMA */ chan = host->dma.chan; - if (chan) { - dma_chan_get(chan); + if (chan) host->data_chan = chan; - } if (!chan) return -ENODEV; diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index e4ec7e7..d18d37d 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -165,7 +165,6 @@ struct dma_slave { */ struct dma_chan_percpu { - local_t refcount; /* stats */ unsigned long memcpy_count; unsigned long bytes_transferred; @@ -205,26 +204,6 @@ struct dma_chan { void dma_chan_cleanup(struct kref *kref); -static inline void dma_chan_get(struct dma_chan *chan) -{ - if (unlikely(chan->slow_ref)) - kref_get(&chan->refcount); - else { - local_inc(&(per_cpu_ptr(chan->local, get_cpu())->refcount)); - put_cpu(); - } -} - -static inline void dma_chan_put(struct dma_chan *chan) -{ - if (unlikely(chan->slow_ref)) - kref_put(&chan->refcount, dma_chan_cleanup); - else { - local_dec(&(per_cpu_ptr(chan->local, get_cpu())->refcount)); - put_cpu(); - } -} - /* * typedef dma_event_callback - function pointer to a DMA event callback * For each channel added to the system this routine is called for each client. diff --git a/include/net/netdma.h b/include/net/netdma.h index f28c6e0..cbe2737 100644 --- a/include/net/netdma.h +++ b/include/net/netdma.h @@ -27,11 +27,11 @@ static inline struct dma_chan *get_softnet_dma(void) { struct dma_chan *chan; + rcu_read_lock(); chan = rcu_dereference(__get_cpu_var(softnet_data).net_dma); - if (chan) - dma_chan_get(chan); rcu_read_unlock(); + return chan; } diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index f28acf1..75e0e0a 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1632,7 +1632,6 @@ skip_copy: /* Safe to free early-copied skbs now */ __skb_queue_purge(&sk->sk_async_wait_queue); - dma_chan_put(tp->ucopy.dma_chan); tp->ucopy.dma_chan = NULL; } if (tp->ucopy.pinned_list) { -- cgit v0.10.2 From bec085134e446577a983f17f57d642a88d1af53b Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:14 -0700 Subject: dmaengine: centralize channel allocation, introduce dma_find_channel Allowing multiple clients to each define their own channel allocation scheme quickly leads to a pathological situation. For memory-to-memory offload all clients can share a central allocator. 
This simply moves the existing async_tx allocator to dmaengine with minimal fixups: * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance * split out common code from async_tx.c:__async_tx_find_channel --> dma_find_channel Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c index 43fe4cb..b88bb1f 100644 --- a/crypto/async_tx/async_tx.c +++ b/crypto/async_tx/async_tx.c @@ -38,25 +38,10 @@ static struct dma_client async_tx_dma = { }; /** - * dma_cap_mask_all - enable iteration over all operation types - */ -static dma_cap_mask_t dma_cap_mask_all; - -/** - * chan_ref_percpu - tracks channel allocations per core/opertion - */ -struct chan_ref_percpu { - struct dma_chan_ref *ref; -}; - -static int channel_table_initialized; -static struct chan_ref_percpu *channel_table[DMA_TX_TYPE_END]; - -/** * async_tx_lock - protect modification of async_tx_master_list and serialize * rebalance operations */ -static spinlock_t async_tx_lock; +static DEFINE_SPINLOCK(async_tx_lock); static LIST_HEAD(async_tx_master_list); @@ -89,85 +74,6 @@ init_dma_chan_ref(struct dma_chan_ref *ref, struct dma_chan *chan) atomic_set(&ref->count, 0); } -/** - * get_chan_ref_by_cap - returns the nth channel of the given capability - * defaults to returning the channel with the desired capability and the - * lowest reference count if the index can not be satisfied - * @cap: capability to match - * @index: nth channel desired, passing -1 has the effect of forcing the - * default return value - */ -static struct dma_chan_ref * -get_chan_ref_by_cap(enum dma_transaction_type cap, int index) -{ - struct dma_chan_ref *ret_ref = NULL, *min_ref = NULL, *ref; - - rcu_read_lock(); - list_for_each_entry_rcu(ref, &async_tx_master_list, node) - if (dma_has_cap(cap, ref->chan->device->cap_mask)) { - if (!min_ref) - min_ref = ref; - else if (atomic_read(&ref->count) < - atomic_read(&min_ref->count)) - min_ref = ref; - - if (index-- == 0) { - ret_ref = ref; - break; - } - } - rcu_read_unlock(); - - if (!ret_ref) - ret_ref = min_ref; - - if (ret_ref) - atomic_inc(&ret_ref->count); - - return ret_ref; -} - -/** - * async_tx_rebalance - redistribute the available channels, optimize - * for cpu isolation in the SMP case, and opertaion isolation in the - * uniprocessor case - */ -static void async_tx_rebalance(void) -{ - int cpu, cap, cpu_idx = 0; - unsigned long flags; - - if (!channel_table_initialized) - return; - - spin_lock_irqsave(&async_tx_lock, flags); - - /* undo the last distribution */ - for_each_dma_cap_mask(cap, dma_cap_mask_all) - for_each_possible_cpu(cpu) { - struct dma_chan_ref *ref = - per_cpu_ptr(channel_table[cap], cpu)->ref; - if (ref) { - atomic_set(&ref->count, 0); - per_cpu_ptr(channel_table[cap], cpu)->ref = - NULL; - } - } - - for_each_dma_cap_mask(cap, dma_cap_mask_all) - for_each_online_cpu(cpu) { - struct dma_chan_ref *new; - if (NR_CPUS > 1) - new = get_chan_ref_by_cap(cap, cpu_idx++); - else - new = get_chan_ref_by_cap(cap, -1); - - per_cpu_ptr(channel_table[cap], cpu)->ref = new; - } - - spin_unlock_irqrestore(&async_tx_lock, flags); -} - static enum dma_state_client dma_channel_add_remove(struct dma_client *client, struct dma_chan *chan, enum dma_state state) @@ -211,8 +117,6 @@ dma_channel_add_remove(struct dma_client *client, " (-ENOMEM)\n"); return 0; } - - async_tx_rebalance(); break; case DMA_RESOURCE_REMOVED: found = 0; @@ -233,8 +137,6 @@ 
dma_channel_add_remove(struct dma_client *client, ack = DMA_ACK; else break; - - async_tx_rebalance(); break; case DMA_RESOURCE_SUSPEND: case DMA_RESOURCE_RESUME: @@ -248,51 +150,18 @@ dma_channel_add_remove(struct dma_client *client, return ack; } -static int __init -async_tx_init(void) +static int __init async_tx_init(void) { - enum dma_transaction_type cap; - - spin_lock_init(&async_tx_lock); - bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END); - - /* an interrupt will never be an explicit operation type. - * clearing this bit prevents allocation to a slot in 'channel_table' - */ - clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits); - - for_each_dma_cap_mask(cap, dma_cap_mask_all) { - channel_table[cap] = alloc_percpu(struct chan_ref_percpu); - if (!channel_table[cap]) - goto err; - } - - channel_table_initialized = 1; dma_async_client_register(&async_tx_dma); dma_async_client_chan_request(&async_tx_dma); printk(KERN_INFO "async_tx: api initialized (async)\n"); return 0; -err: - printk(KERN_ERR "async_tx: initialization failure\n"); - - while (--cap >= 0) - free_percpu(channel_table[cap]); - - return 1; } static void __exit async_tx_exit(void) { - enum dma_transaction_type cap; - - channel_table_initialized = 0; - - for_each_dma_cap_mask(cap, dma_cap_mask_all) - if (channel_table[cap]) - free_percpu(channel_table[cap]); - dma_async_client_unregister(&async_tx_dma); } @@ -308,16 +177,9 @@ __async_tx_find_channel(struct dma_async_tx_descriptor *depend_tx, { /* see if we can keep the chain on one channel */ if (depend_tx && - dma_has_cap(tx_type, depend_tx->chan->device->cap_mask)) + dma_has_cap(tx_type, depend_tx->chan->device->cap_mask)) return depend_tx->chan; - else if (likely(channel_table_initialized)) { - struct dma_chan_ref *ref; - int cpu = get_cpu(); - ref = per_cpu_ptr(channel_table[tx_type], cpu)->ref; - put_cpu(); - return ref ? 
ref->chan : NULL; - } else - return NULL; + return dma_find_channel(tx_type); } EXPORT_SYMBOL_GPL(__async_tx_find_channel); #else diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index d4d9259..87a8cd4 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -295,6 +295,164 @@ static void dma_chan_release(struct dma_chan *chan) } /** + * dma_cap_mask_all - enable iteration over all operation types + */ +static dma_cap_mask_t dma_cap_mask_all; + +/** + * dma_chan_tbl_ent - tracks channel allocations per core/operation + * @chan - associated channel for this entry + */ +struct dma_chan_tbl_ent { + struct dma_chan *chan; +}; + +/** + * channel_table - percpu lookup table for memory-to-memory offload providers + */ +static struct dma_chan_tbl_ent *channel_table[DMA_TX_TYPE_END]; + +static int __init dma_channel_table_init(void) +{ + enum dma_transaction_type cap; + int err = 0; + + bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END); + + /* 'interrupt' and 'slave' are channel capabilities, but are not + * associated with an operation so they do not need an entry in the + * channel_table + */ + clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits); + clear_bit(DMA_SLAVE, dma_cap_mask_all.bits); + + for_each_dma_cap_mask(cap, dma_cap_mask_all) { + channel_table[cap] = alloc_percpu(struct dma_chan_tbl_ent); + if (!channel_table[cap]) { + err = -ENOMEM; + break; + } + } + + if (err) { + pr_err("dmaengine: initialization failure\n"); + for_each_dma_cap_mask(cap, dma_cap_mask_all) + if (channel_table[cap]) + free_percpu(channel_table[cap]); + } + + return err; +} +subsys_initcall(dma_channel_table_init); + +/** + * dma_find_channel - find a channel to carry out the operation + * @tx_type: transaction type + */ +struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type) +{ + struct dma_chan *chan; + int cpu; + + WARN_ONCE(dmaengine_ref_count == 0, + "client called %s without a reference", __func__); + + cpu = get_cpu(); + chan = per_cpu_ptr(channel_table[tx_type], cpu)->chan; + put_cpu(); + + return chan; +} +EXPORT_SYMBOL(dma_find_channel); + +/** + * nth_chan - returns the nth channel of the given capability + * @cap: capability to match + * @n: nth channel desired + * + * Defaults to returning the channel with the desired capability and the + * lowest reference count when 'n' cannot be satisfied. Must be called + * under dma_list_mutex. + */ +static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n) +{ + struct dma_device *device; + struct dma_chan *chan; + struct dma_chan *ret = NULL; + struct dma_chan *min = NULL; + + list_for_each_entry(device, &dma_device_list, global_node) { + if (!dma_has_cap(cap, device->cap_mask)) + continue; + list_for_each_entry(chan, &device->channels, device_node) { + if (!chan->client_count) + continue; + if (!min) + min = chan; + else if (chan->table_count < min->table_count) + min = chan; + + if (n-- == 0) { + ret = chan; + break; /* done */ + } + } + if (ret) + break; /* done */ + } + + if (!ret) + ret = min; + + if (ret) + ret->table_count++; + + return ret; +} + +/** + * dma_channel_rebalance - redistribute the available channels + * + * Optimize for cpu isolation (each cpu gets a dedicated channel for an + * operation type) in the SMP case, and operation isolation (avoid + * multi-tasking channels) in the non-SMP case. Must be called under + * dma_list_mutex. 
+ */ +static void dma_channel_rebalance(void) +{ + struct dma_chan *chan; + struct dma_device *device; + int cpu; + int cap; + int n; + + /* undo the last distribution */ + for_each_dma_cap_mask(cap, dma_cap_mask_all) + for_each_possible_cpu(cpu) + per_cpu_ptr(channel_table[cap], cpu)->chan = NULL; + + list_for_each_entry(device, &dma_device_list, global_node) + list_for_each_entry(chan, &device->channels, device_node) + chan->table_count = 0; + + /* don't populate the channel_table if no clients are available */ + if (!dmaengine_ref_count) + return; + + /* redistribute available channels */ + n = 0; + for_each_dma_cap_mask(cap, dma_cap_mask_all) + for_each_online_cpu(cpu) { + if (num_possible_cpus() > 1) + chan = nth_chan(cap, n++); + else + chan = nth_chan(cap, -1); + + per_cpu_ptr(channel_table[cap], cpu)->chan = chan; + } +} + +/** * dma_chans_notify_available - broadcast available channels to the clients */ static void dma_clients_notify_available(void) @@ -339,7 +497,12 @@ void dma_async_client_register(struct dma_client *client) dev_name(&chan->dev), err); } - + /* if this is the first reference and there were channels + * waiting we need to rebalance to get those channels + * incorporated into the channel table + */ + if (dmaengine_ref_count == 1) + dma_channel_rebalance(); list_add_tail(&client->global_node, &dma_client_list); mutex_unlock(&dma_list_mutex); } @@ -473,6 +636,7 @@ int dma_async_device_register(struct dma_device *device) } } list_add_tail(&device->global_node, &dma_device_list); + dma_channel_rebalance(); mutex_unlock(&dma_list_mutex); dma_clients_notify_available(); @@ -514,6 +678,7 @@ void dma_async_device_unregister(struct dma_device *device) mutex_lock(&dma_list_mutex); list_del(&device->global_node); + dma_channel_rebalance(); mutex_unlock(&dma_list_mutex); list_for_each_entry(chan, &device->channels, device_node) { @@ -768,3 +933,4 @@ static int __init dma_bus_init(void) } subsys_initcall(dma_bus_init); + diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index d18d37d..b466f02 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -182,6 +182,7 @@ struct dma_chan_percpu { * @device_node: used to add this to the device chan list * @local: per-cpu pointer to a struct dma_chan_percpu * @client-count: how many clients are using this channel + * @table_count: number of appearances in the mem-to-mem allocation table */ struct dma_chan { struct dma_device *device; @@ -198,6 +199,7 @@ struct dma_chan { struct list_head device_node; struct dma_chan_percpu *local; int client_count; + int table_count; }; #define to_dma_chan(p) container_of(p, struct dma_chan, dev) @@ -468,6 +470,7 @@ static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descript int dma_async_device_register(struct dma_device *device); void dma_async_device_unregister(struct dma_device *device); void dma_run_dependencies(struct dma_async_tx_descriptor *tx); +struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type); /* --- Helper iov-locking functions --- */ -- cgit v0.10.2 From 2ba05622b8b143b0c95968ba59bddfbd6d2f2559 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:14 -0700 Subject: dmaengine: provide a common 'issue_pending_all' implementation async_tx and net_dma each have open-coded versions of issue_pending_all, so provide a common routine in dmaengine. The implementation needs to walk the global device list, so implement rcu to allow dma_issue_pending_all to run lockless. 
Clients protect themselves from channel removal events by holding a dmaengine reference. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c index b88bb1f..2cdf7a0 100644 --- a/crypto/async_tx/async_tx.c +++ b/crypto/async_tx/async_tx.c @@ -45,18 +45,6 @@ static DEFINE_SPINLOCK(async_tx_lock); static LIST_HEAD(async_tx_master_list); -/* async_tx_issue_pending_all - start all transactions on all channels */ -void async_tx_issue_pending_all(void) -{ - struct dma_chan_ref *ref; - - rcu_read_lock(); - list_for_each_entry_rcu(ref, &async_tx_master_list, node) - ref->chan->device->device_issue_pending(ref->chan); - rcu_read_unlock(); -} -EXPORT_SYMBOL_GPL(async_tx_issue_pending_all); - static void free_dma_chan_ref(struct rcu_head *rcu) { diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 87a8cd4..418eca2 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -70,6 +70,7 @@ #include #include #include +#include static DEFINE_MUTEX(dma_list_mutex); static LIST_HEAD(dma_device_list); @@ -366,6 +367,26 @@ struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type) EXPORT_SYMBOL(dma_find_channel); /** + * dma_issue_pending_all - flush all pending operations across all channels + */ +void dma_issue_pending_all(void) +{ + struct dma_device *device; + struct dma_chan *chan; + + WARN_ONCE(dmaengine_ref_count == 0, + "client called %s without a reference", __func__); + + rcu_read_lock(); + list_for_each_entry_rcu(device, &dma_device_list, global_node) + list_for_each_entry(chan, &device->channels, device_node) + if (chan->client_count) + device->device_issue_pending(chan); + rcu_read_unlock(); +} +EXPORT_SYMBOL(dma_issue_pending_all); + +/** * nth_chan - returns the nth channel of the given capability * @cap: capability to match * @n: nth channel desired @@ -490,7 +511,7 @@ void dma_async_client_register(struct dma_client *client) err = dma_chan_get(chan); if (err == -ENODEV) { /* module removed before we could use it */ - list_del_init(&device->global_node); + list_del_rcu(&device->global_node); break; } else if (err) pr_err("dmaengine: failed to get %s: (%d)\n", @@ -635,7 +656,7 @@ int dma_async_device_register(struct dma_device *device) goto err_out; } } - list_add_tail(&device->global_node, &dma_device_list); + list_add_tail_rcu(&device->global_node, &dma_device_list); dma_channel_rebalance(); mutex_unlock(&dma_list_mutex); @@ -677,7 +698,7 @@ void dma_async_device_unregister(struct dma_device *device) struct dma_chan *chan; mutex_lock(&dma_list_mutex); - list_del(&device->global_node); + list_del_rcu(&device->global_node); dma_channel_rebalance(); mutex_unlock(&dma_list_mutex); diff --git a/include/linux/async_tx.h b/include/linux/async_tx.h index 1c81677..45f6297 100644 --- a/include/linux/async_tx.h +++ b/include/linux/async_tx.h @@ -59,7 +59,7 @@ enum async_tx_flags { }; #ifdef CONFIG_DMA_ENGINE -void async_tx_issue_pending_all(void); +#define async_tx_issue_pending_all dma_issue_pending_all #ifdef CONFIG_ARCH_HAS_ASYNC_TX_FIND_CHANNEL #include #else diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index b466f02..57a43ad 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -471,6 +471,7 @@ int dma_async_device_register(struct dma_device *device); void dma_async_device_unregister(struct dma_device *device); void dma_run_dependencies(struct dma_async_tx_descriptor *tx); struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type); +void 
dma_issue_pending_all(void); /* --- Helper iov-locking functions --- */ diff --git a/net/core/dev.c b/net/core/dev.c index 09c66a4..e40b0d5 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2635,14 +2635,7 @@ out: * There may not be any more sk_buffs coming right now, so push * any pending DMA copies to hardware */ - if (!cpus_empty(net_dma.channel_mask)) { - int chan_idx; - for_each_cpu_mask_nr(chan_idx, net_dma.channel_mask) { - struct dma_chan *chan = net_dma.channels[chan_idx]; - if (chan) - dma_async_memcpy_issue_pending(chan); - } - } + dma_issue_pending_all(); #endif return; -- cgit v0.10.2 From f67b45999205164958de4ec0658d51fa4bee066d Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:15 -0700 Subject: net_dma: convert to dma_find_channel Use the general-purpose channel allocation provided by dmaengine. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 41e1224..bac2c45 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -1113,9 +1113,6 @@ struct softnet_data struct sk_buff *completion_queue; struct napi_struct backlog; -#ifdef CONFIG_NET_DMA - struct dma_chan *net_dma; -#endif }; DECLARE_PER_CPU(struct softnet_data,softnet_data); diff --git a/include/net/netdma.h b/include/net/netdma.h index cbe2737..8ba8ce2 100644 --- a/include/net/netdma.h +++ b/include/net/netdma.h @@ -24,17 +24,6 @@ #include #include -static inline struct dma_chan *get_softnet_dma(void) -{ - struct dma_chan *chan; - - rcu_read_lock(); - chan = rcu_dereference(__get_cpu_var(softnet_data).net_dma); - rcu_read_unlock(); - - return chan; -} - int dma_skb_copy_datagram_iovec(struct dma_chan* chan, struct sk_buff *skb, int offset, struct iovec *to, size_t len, struct dma_pinned_list *pinned_list); diff --git a/net/core/dev.c b/net/core/dev.c index e40b0d5..bbb07db 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -4828,44 +4828,6 @@ static int dev_cpu_callback(struct notifier_block *nfb, #ifdef CONFIG_NET_DMA /** - * net_dma_rebalance - try to maintain one DMA channel per CPU - * @net_dma: DMA client and associated data (lock, channels, channel_mask) - * - * This is called when the number of channels allocated to the net_dma client - * changes. The net_dma client tries to have one DMA channel per CPU. - */ - -static void net_dma_rebalance(struct net_dma *net_dma) -{ - unsigned int cpu, i, n, chan_idx; - struct dma_chan *chan; - - if (cpus_empty(net_dma->channel_mask)) { - for_each_online_cpu(cpu) - rcu_assign_pointer(per_cpu(softnet_data, cpu).net_dma, NULL); - return; - } - - i = 0; - cpu = first_cpu(cpu_online_map); - - for_each_cpu_mask_nr(chan_idx, net_dma->channel_mask) { - chan = net_dma->channels[chan_idx]; - - n = ((num_online_cpus() / cpus_weight(net_dma->channel_mask)) - + (i < (num_online_cpus() % - cpus_weight(net_dma->channel_mask)) ? 
1 : 0)); - - while(n) { - per_cpu(softnet_data, cpu).net_dma = chan; - cpu = next_cpu(cpu, cpu_online_map); - n--; - } - i++; - } -} - -/** * netdev_dma_event - event callback for the net_dma_client * @client: should always be net_dma_client * @chan: DMA channel for the event @@ -4894,7 +4856,6 @@ netdev_dma_event(struct dma_client *client, struct dma_chan *chan, ack = DMA_ACK; net_dma->channels[pos] = chan; cpu_set(pos, net_dma->channel_mask); - net_dma_rebalance(net_dma); } break; case DMA_RESOURCE_REMOVED: @@ -4909,7 +4870,6 @@ netdev_dma_event(struct dma_client *client, struct dma_chan *chan, ack = DMA_ACK; cpu_clear(pos, net_dma->channel_mask); net_dma->channels[i] = NULL; - net_dma_rebalance(net_dma); } break; default: diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 75e0e0a..9b275ab 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1317,7 +1317,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, if ((available < target) && (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) && !sysctl_tcp_low_latency && - __get_cpu_var(softnet_data).net_dma) { + dma_find_channel(DMA_MEMCPY)) { preempt_enable_no_resched(); tp->ucopy.pinned_list = dma_pin_iovec_pages(msg->msg_iov, len); @@ -1527,7 +1527,7 @@ do_prequeue: if (!(flags & MSG_TRUNC)) { #ifdef CONFIG_NET_DMA if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) - tp->ucopy.dma_chan = get_softnet_dma(); + tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); if (tp->ucopy.dma_chan) { tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec( diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 99b7ecb..a6961d7 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5005,7 +5005,7 @@ static int tcp_dma_try_early_copy(struct sock *sk, struct sk_buff *skb, return 0; if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) - tp->ucopy.dma_chan = get_softnet_dma(); + tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); if (tp->ucopy.dma_chan && skb_csum_unnecessary(skb)) { diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 9d839fa..19d7b42 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1594,7 +1594,7 @@ process: #ifdef CONFIG_NET_DMA struct tcp_sock *tp = tcp_sk(sk); if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) - tp->ucopy.dma_chan = get_softnet_dma(); + tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); if (tp->ucopy.dma_chan) ret = tcp_v4_do_rcv(sk, skb); else diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index e8b8337..71cd709 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -1640,7 +1640,7 @@ process: #ifdef CONFIG_NET_DMA struct tcp_sock *tp = tcp_sk(sk); if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list) - tp->ucopy.dma_chan = get_softnet_dma(); + tp->ucopy.dma_chan = dma_find_channel(DMA_MEMCPY); if (tp->ucopy.dma_chan) ret = tcp_v6_do_rcv(sk, skb); else -- cgit v0.10.2 From 59b5ec21446b9239d706ab237fb261d525b75e81 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:15 -0700 Subject: dmaengine: introduce dma_request_channel and private channels This interface is primarily for device-to-memory clients which need to search for dma channels with platform-specific characteristics. The prototype is: struct dma_chan *dma_request_channel(dma_cap_mask_t mask, dma_filter_fn filter_fn, void *filter_param); When the optional 'filter_fn' parameter is set to NULL dma_request_channel simply returns the first channel that satisfies the capability mask. 
Otherwise, when the mask parameter is insufficient for specifying the necessary channel, the filter_fn routine can be used to disposition the available channels in the system. The filter_fn routine is called once for each free channel in the system. Upon seeing a suitable channel filter_fn returns DMA_ACK which flags that channel to be the return value from dma_request_channel. A channel allocated via this interface is exclusive to the caller, until dma_release_channel() is called. To ensure that all channels are not consumed by the general-purpose allocator the DMA_PRIVATE capability is provided to exclude a dma_device from general-purpose (memory-to-memory) consideration. Reviewed-by: Andrew Morton Acked-by: Maciej Sosnowski Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 418eca2..7a0594f 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -134,14 +134,14 @@ static struct class dma_devclass = { /* --- client and device registration --- */ -#define dma_chan_satisfies_mask(chan, mask) \ - __dma_chan_satisfies_mask((chan), &(mask)) +#define dma_device_satisfies_mask(device, mask) \ + __dma_device_satisfies_mask((device), &(mask)) static int -__dma_chan_satisfies_mask(struct dma_chan *chan, dma_cap_mask_t *want) +__dma_device_satisfies_mask(struct dma_device *device, dma_cap_mask_t *want) { dma_cap_mask_t has; - bitmap_and(has.bits, want->bits, chan->device->cap_mask.bits, + bitmap_and(has.bits, want->bits, device->cap_mask.bits, DMA_TX_TYPE_END); return bitmap_equal(want->bits, has.bits, DMA_TX_TYPE_END); } @@ -195,7 +195,7 @@ static int dma_chan_get(struct dma_chan *chan) err = desc_cnt; chan->client_count = 0; module_put(owner); - } else + } else if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask)) balance_ref_count(chan); } @@ -232,14 +232,16 @@ static void dma_client_chan_alloc(struct dma_client *client) /* Find a channel */ list_for_each_entry(device, &dma_device_list, global_node) { + if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) + continue; /* Does the client require a specific DMA controller? 
*/ if (client->slave && client->slave->dma_dev && client->slave->dma_dev != device->dev) continue; + if (!dma_device_satisfies_mask(device, client->cap_mask)) + continue; list_for_each_entry(chan, &device->channels, device_node) { - if (!dma_chan_satisfies_mask(chan, client->cap_mask)) - continue; if (!chan->client_count) continue; ack = client->event_callback(client, chan, @@ -320,11 +322,12 @@ static int __init dma_channel_table_init(void) bitmap_fill(dma_cap_mask_all.bits, DMA_TX_TYPE_END); - /* 'interrupt' and 'slave' are channel capabilities, but are not - * associated with an operation so they do not need an entry in the - * channel_table + /* 'interrupt', 'private', and 'slave' are channel capabilities, + * but are not associated with an operation so they do not need + * an entry in the channel_table */ clear_bit(DMA_INTERRUPT, dma_cap_mask_all.bits); + clear_bit(DMA_PRIVATE, dma_cap_mask_all.bits); clear_bit(DMA_SLAVE, dma_cap_mask_all.bits); for_each_dma_cap_mask(cap, dma_cap_mask_all) { @@ -378,10 +381,13 @@ void dma_issue_pending_all(void) "client called %s without a reference", __func__); rcu_read_lock(); - list_for_each_entry_rcu(device, &dma_device_list, global_node) + list_for_each_entry_rcu(device, &dma_device_list, global_node) { + if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) + continue; list_for_each_entry(chan, &device->channels, device_node) if (chan->client_count) device->device_issue_pending(chan); + } rcu_read_unlock(); } EXPORT_SYMBOL(dma_issue_pending_all); @@ -403,7 +409,8 @@ static struct dma_chan *nth_chan(enum dma_transaction_type cap, int n) struct dma_chan *min = NULL; list_for_each_entry(device, &dma_device_list, global_node) { - if (!dma_has_cap(cap, device->cap_mask)) + if (!dma_has_cap(cap, device->cap_mask) || + dma_has_cap(DMA_PRIVATE, device->cap_mask)) continue; list_for_each_entry(chan, &device->channels, device_node) { if (!chan->client_count) @@ -452,9 +459,12 @@ static void dma_channel_rebalance(void) for_each_possible_cpu(cpu) per_cpu_ptr(channel_table[cap], cpu)->chan = NULL; - list_for_each_entry(device, &dma_device_list, global_node) + list_for_each_entry(device, &dma_device_list, global_node) { + if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) + continue; list_for_each_entry(chan, &device->channels, device_node) chan->table_count = 0; + } /* don't populate the channel_table if no clients are available */ if (!dmaengine_ref_count) @@ -473,6 +483,111 @@ static void dma_channel_rebalance(void) } } +static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev) +{ + struct dma_chan *chan; + struct dma_chan *ret = NULL; + + if (!__dma_device_satisfies_mask(dev, mask)) { + pr_debug("%s: wrong capabilities\n", __func__); + return NULL; + } + /* devices with multiple channels need special handling as we need to + * ensure that all channels are either private or public. 
+ */ + if (dev->chancnt > 1 && !dma_has_cap(DMA_PRIVATE, dev->cap_mask)) + list_for_each_entry(chan, &dev->channels, device_node) { + /* some channels are already publicly allocated */ + if (chan->client_count) + return NULL; + } + + list_for_each_entry(chan, &dev->channels, device_node) { + if (chan->client_count) { + pr_debug("%s: %s busy\n", + __func__, dev_name(&chan->dev)); + continue; + } + ret = chan; + break; + } + + return ret; +} + +/** + * dma_request_channel - try to allocate an exclusive channel + * @mask: capabilities that the channel must satisfy + * @fn: optional callback to disposition available channels + * @fn_param: opaque parameter to pass to dma_filter_fn + */ +struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param) +{ + struct dma_device *device, *_d; + struct dma_chan *chan = NULL; + enum dma_state_client ack; + int err; + + /* Find a channel */ + mutex_lock(&dma_list_mutex); + list_for_each_entry_safe(device, _d, &dma_device_list, global_node) { + chan = private_candidate(mask, device); + if (!chan) + continue; + + if (fn) + ack = fn(chan, fn_param); + else + ack = DMA_ACK; + + if (ack == DMA_ACK) { + /* Found a suitable channel, try to grab, prep, and + * return it. We first set DMA_PRIVATE to disable + * balance_ref_count as this channel will not be + * published in the general-purpose allocator + */ + dma_cap_set(DMA_PRIVATE, device->cap_mask); + err = dma_chan_get(chan); + + if (err == -ENODEV) { + pr_debug("%s: %s module removed\n", __func__, + dev_name(&chan->dev)); + list_del_rcu(&device->global_node); + } else if (err) + pr_err("dmaengine: failed to get %s: (%d)\n", + dev_name(&chan->dev), err); + else + break; + } else if (ack == DMA_DUP) { + pr_debug("%s: %s filter said DMA_DUP\n", + __func__, dev_name(&chan->dev)); + } else if (ack == DMA_NAK) { + pr_debug("%s: %s filter said DMA_NAK\n", + __func__, dev_name(&chan->dev)); + break; + } else + WARN_ONCE(1, "filter_fn: unknown response?\n"); + chan = NULL; + } + mutex_unlock(&dma_list_mutex); + + pr_debug("%s: %s (%s)\n", __func__, chan ? "success" : "fail", + chan ? 
dev_name(&chan->dev) : NULL); + + return chan; +} +EXPORT_SYMBOL_GPL(__dma_request_channel); + +void dma_release_channel(struct dma_chan *chan) +{ + mutex_lock(&dma_list_mutex); + WARN_ONCE(chan->client_count != 1, + "chan reference count %d != 1\n", chan->client_count); + dma_chan_put(chan); + mutex_unlock(&dma_list_mutex); +} +EXPORT_SYMBOL_GPL(dma_release_channel); + /** * dma_chans_notify_available - broadcast available channels to the clients */ @@ -506,7 +621,9 @@ void dma_async_client_register(struct dma_client *client) dmaengine_ref_count++; /* try to grab channels */ - list_for_each_entry_safe(device, _d, &dma_device_list, global_node) + list_for_each_entry_safe(device, _d, &dma_device_list, global_node) { + if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) + continue; list_for_each_entry(chan, &device->channels, device_node) { err = dma_chan_get(chan); if (err == -ENODEV) { @@ -517,6 +634,7 @@ void dma_async_client_register(struct dma_client *client) pr_err("dmaengine: failed to get %s: (%d)\n", dev_name(&chan->dev), err); } + } /* if this is the first reference and there were channels * waiting we need to rebalance to get those channels @@ -547,9 +665,12 @@ void dma_async_client_unregister(struct dma_client *client) dmaengine_ref_count--; BUG_ON(dmaengine_ref_count < 0); /* drop channel references */ - list_for_each_entry(device, &dma_device_list, global_node) + list_for_each_entry(device, &dma_device_list, global_node) { + if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) + continue; list_for_each_entry(chan, &device->channels, device_node) dma_chan_put(chan); + } list_del(&client->global_node); mutex_unlock(&dma_list_mutex); @@ -639,9 +760,11 @@ int dma_async_device_register(struct dma_device *device) chan->slow_ref = 0; INIT_RCU_HEAD(&chan->rcu); } + device->chancnt = chancnt; mutex_lock(&dma_list_mutex); - if (dmaengine_ref_count) + /* take references on public channels */ + if (dmaengine_ref_count && !dma_has_cap(DMA_PRIVATE, device->cap_mask)) list_for_each_entry(chan, &device->channels, device_node) { /* if clients are already waiting for channels we need * to take references on their behalf diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index 57a43ad..fe40bc0 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -89,6 +89,7 @@ enum dma_transaction_type { DMA_MEMSET, DMA_MEMCPY_CRC32C, DMA_INTERRUPT, + DMA_PRIVATE, DMA_SLAVE, }; @@ -224,6 +225,18 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client, struct dma_chan *chan, enum dma_state state); /** + * typedef dma_filter_fn - callback filter for dma_request_channel + * @chan: channel to be reviewed + * @filter_param: opaque parameter passed through dma_request_channel + * + * When this optional parameter is specified in a call to dma_request_channel a + * suitable channel is passed to this routine for further dispositioning before + * being returned. Where 'suitable' indicates a non-busy channel that + * satisfies the given capability mask. 
+ */ +typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filter_param); + +/** * struct dma_client - info on the entity making use of DMA services * @event_callback: func ptr to call when something happens * @cap_mask: only return channels that satisfy the requested capabilities @@ -472,6 +485,9 @@ void dma_async_device_unregister(struct dma_device *device); void dma_run_dependencies(struct dma_async_tx_descriptor *tx); struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type); void dma_issue_pending_all(void); +#define dma_request_channel(mask, x, y) __dma_request_channel(&(mask), x, y) +struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param); +void dma_release_channel(struct dma_chan *chan); /* --- Helper iov-locking functions --- */ -- cgit v0.10.2 From 33df8ca068123457db56c316946a3c0e4ef787d6 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:15 -0700 Subject: dmatest: convert to dma_request_channel Replace the client registration infrastructure with a custom loop to poll for channels. Once dma_request_channel returns NULL stop asking for channels. A userspace side effect of this change if that loading the dmatest module before loading a dma driver will result in no channels being found, previously dmatest would get a callback. To facilitate testing in the built-in case dmatest_init is marked as a late_initcall. Another side effect is that channels under test can not be used for any other purpose. Cc: Haavard Skinnemoen Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c index db40508..1d6e48f 100644 --- a/drivers/dma/dmatest.c +++ b/drivers/dma/dmatest.c @@ -35,7 +35,7 @@ MODULE_PARM_DESC(threads_per_chan, static unsigned int max_channels; module_param(max_channels, uint, S_IRUGO); -MODULE_PARM_DESC(nr_channels, +MODULE_PARM_DESC(max_channels, "Maximum number of channels to use (default: all)"); /* @@ -71,7 +71,7 @@ struct dmatest_chan { /* * These are protected by dma_list_mutex since they're only used by - * the DMA client event callback + * the DMA filter function callback */ static LIST_HEAD(dmatest_channels); static unsigned int nr_channels; @@ -317,21 +317,16 @@ static void dmatest_cleanup_channel(struct dmatest_chan *dtc) kfree(dtc); } -static enum dma_state_client dmatest_add_channel(struct dma_chan *chan) +static int dmatest_add_channel(struct dma_chan *chan) { struct dmatest_chan *dtc; struct dmatest_thread *thread; unsigned int i; - /* Have we already been told about this channel? 
*/ - list_for_each_entry(dtc, &dmatest_channels, node) - if (dtc->chan == chan) - return DMA_DUP; - dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL); if (!dtc) { pr_warning("dmatest: No memory for %s\n", dev_name(&chan->dev)); - return DMA_NAK; + return -ENOMEM; } dtc->chan = chan; @@ -365,81 +360,57 @@ static enum dma_state_client dmatest_add_channel(struct dma_chan *chan) list_add_tail(&dtc->node, &dmatest_channels); nr_channels++; - return DMA_ACK; + return 0; } -static enum dma_state_client dmatest_remove_channel(struct dma_chan *chan) +static enum dma_state_client filter(struct dma_chan *chan, void *param) { - struct dmatest_chan *dtc, *_dtc; - - list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) { - if (dtc->chan == chan) { - list_del(&dtc->node); - dmatest_cleanup_channel(dtc); - pr_debug("dmatest: lost channel %s\n", - dev_name(&chan->dev)); - return DMA_ACK; - } - } - - return DMA_DUP; -} - -/* - * Start testing threads as new channels are assigned to us, and kill - * them when the channels go away. - * - * When we unregister the client, all channels are removed so this - * will also take care of cleaning things up when the module is - * unloaded. - */ -static enum dma_state_client -dmatest_event(struct dma_client *client, struct dma_chan *chan, - enum dma_state state) -{ - enum dma_state_client ack = DMA_NAK; - - switch (state) { - case DMA_RESOURCE_AVAILABLE: - if (!dmatest_match_channel(chan) - || !dmatest_match_device(chan->device)) - ack = DMA_DUP; - else if (max_channels && nr_channels >= max_channels) - ack = DMA_NAK; - else - ack = dmatest_add_channel(chan); - break; - - case DMA_RESOURCE_REMOVED: - ack = dmatest_remove_channel(chan); - break; - - default: - pr_info("dmatest: Unhandled event %u (%s)\n", - state, dev_name(&chan->dev)); - break; - } - - return ack; + if (!dmatest_match_channel(chan) || !dmatest_match_device(chan->device)) + return DMA_DUP; + else + return DMA_ACK; } -static struct dma_client dmatest_client = { - .event_callback = dmatest_event, -}; - static int __init dmatest_init(void) { - dma_cap_set(DMA_MEMCPY, dmatest_client.cap_mask); - dma_async_client_register(&dmatest_client); - dma_async_client_chan_request(&dmatest_client); + dma_cap_mask_t mask; + struct dma_chan *chan; + int err = 0; + + dma_cap_zero(mask); + dma_cap_set(DMA_MEMCPY, mask); + for (;;) { + chan = dma_request_channel(mask, filter, NULL); + if (chan) { + err = dmatest_add_channel(chan); + if (err == 0) + continue; + else { + dma_release_channel(chan); + break; /* add_channel failed, punt */ + } + } else + break; /* no more channels available */ + if (max_channels && nr_channels >= max_channels) + break; /* we have all we need */ + } - return 0; + return err; } -module_init(dmatest_init); +/* when compiled-in wait for drivers to load first */ +late_initcall(dmatest_init); static void __exit dmatest_exit(void) { - dma_async_client_unregister(&dmatest_client); + struct dmatest_chan *dtc, *_dtc; + + list_for_each_entry_safe(dtc, _dtc, &dmatest_channels, node) { + list_del(&dtc->node); + dmatest_cleanup_channel(dtc); + pr_debug("dmatest: dropped channel %s\n", + dev_name(&dtc->chan->dev)); + dma_release_channel(dtc->chan); + } } module_exit(dmatest_exit); diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index fe40bc0..6f2d070 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -400,6 +400,12 @@ __dma_cap_set(enum dma_transaction_type tx_type, dma_cap_mask_t *dstp) set_bit(tx_type, dstp->bits); } +#define dma_cap_zero(mask) 
__dma_cap_zero(&(mask)) +static inline void __dma_cap_zero(dma_cap_mask_t *dstp) +{ + bitmap_zero(dstp->bits, DMA_TX_TYPE_END); +} + #define dma_has_cap(tx, mask) __dma_has_cap((tx), &(mask)) static inline int __dma_has_cap(enum dma_transaction_type tx_type, dma_cap_mask_t *srcp) -- cgit v0.10.2 From 74465b4ff9ac1da503025c0a0042e023bfa6505c Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:16 -0700 Subject: atmel-mci: convert to dma_request_channel and down-level dma_slave dma_request_channel provides an exclusive channel, so we no longer need to pass slave data through dmaengine. Cc: Haavard Skinnemoen Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/arch/avr32/include/asm/atmel-mci.h b/arch/avr32/include/asm/atmel-mci.h index 59f3fad..e5e54c6 100644 --- a/arch/avr32/include/asm/atmel-mci.h +++ b/arch/avr32/include/asm/atmel-mci.h @@ -3,7 +3,7 @@ #define ATMEL_MCI_MAX_NR_SLOTS 2 -struct dma_slave; +#include /** * struct mci_slot_pdata - board-specific per-slot configuration @@ -28,11 +28,11 @@ struct mci_slot_pdata { /** * struct mci_platform_data - board-specific MMC/SDcard configuration - * @dma_slave: DMA slave interface to use in data transfers, or NULL. + * @dma_slave: DMA slave interface to use in data transfers. * @slot: Per-slot configuration data. */ struct mci_platform_data { - struct dma_slave *dma_slave; + struct dw_dma_slave dma_slave; struct mci_slot_pdata slot[ATMEL_MCI_MAX_NR_SLOTS]; }; diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c index 066252e..414f174 100644 --- a/arch/avr32/mach-at32ap/at32ap700x.c +++ b/arch/avr32/mach-at32ap/at32ap700x.c @@ -1305,7 +1305,7 @@ struct platform_device *__init at32_add_device_mci(unsigned int id, struct mci_platform_data *data) { struct platform_device *pdev; - struct dw_dma_slave *dws; + struct dw_dma_slave *dws = &data->dma_slave; u32 pioa_mask; u32 piob_mask; @@ -1324,22 +1324,13 @@ at32_add_device_mci(unsigned int id, struct mci_platform_data *data) ARRAY_SIZE(atmel_mci0_resource))) goto fail; - if (data->dma_slave) - dws = kmemdup(to_dw_dma_slave(data->dma_slave), - sizeof(struct dw_dma_slave), GFP_KERNEL); - else - dws = kzalloc(sizeof(struct dw_dma_slave), GFP_KERNEL); - - dws->slave.dev = &pdev->dev; - dws->slave.dma_dev = &dw_dmac0_device.dev; - dws->slave.reg_width = DMA_SLAVE_WIDTH_32BIT; + dws->dma_dev = &dw_dmac0_device.dev; + dws->reg_width = DW_DMA_SLAVE_WIDTH_32BIT; dws->cfg_hi = (DWC_CFGH_SRC_PER(0) | DWC_CFGH_DST_PER(1)); dws->cfg_lo &= ~(DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL); - data->dma_slave = &dws->slave; - if (platform_device_add_data(pdev, data, sizeof(struct mci_platform_data))) goto fail; diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 7a0594f..90aca50 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -234,10 +234,6 @@ static void dma_client_chan_alloc(struct dma_client *client) list_for_each_entry(device, &dma_device_list, global_node) { if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) continue; - /* Does the client require a specific DMA controller? 
*/ - if (client->slave && client->slave->dma_dev - && client->slave->dma_dev != device->dev) - continue; if (!dma_device_satisfies_mask(device, client->cap_mask)) continue; @@ -613,10 +609,6 @@ void dma_async_client_register(struct dma_client *client) struct dma_chan *chan; int err; - /* validate client data */ - BUG_ON(dma_has_cap(DMA_SLAVE, client->cap_mask) && - !client->slave); - mutex_lock(&dma_list_mutex); dmaengine_ref_count++; diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c index 377dafa..dbd5080 100644 --- a/drivers/dma/dw_dmac.c +++ b/drivers/dma/dw_dmac.c @@ -567,7 +567,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, if (unlikely(!dws || !sg_len)) return NULL; - reg_width = dws->slave.reg_width; + reg_width = dws->reg_width; prev = first = NULL; sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction); @@ -579,7 +579,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, | DWC_CTLL_DST_FIX | DWC_CTLL_SRC_INC | DWC_CTLL_FC_M2P); - reg = dws->slave.tx_reg; + reg = dws->tx_reg; for_each_sg(sgl, sg, sg_len, i) { struct dw_desc *desc; u32 len; @@ -625,7 +625,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, | DWC_CTLL_SRC_FIX | DWC_CTLL_FC_P2M); - reg = dws->slave.rx_reg; + reg = dws->rx_reg; for_each_sg(sgl, sg, sg_len, i) { struct dw_desc *desc; u32 len; @@ -764,7 +764,6 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan, struct dw_dma_chan *dwc = to_dw_dma_chan(chan); struct dw_dma *dw = to_dw_dma(chan->device); struct dw_desc *desc; - struct dma_slave *slave; struct dw_dma_slave *dws; int i; u32 cfghi; @@ -772,12 +771,6 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan, dev_vdbg(&chan->dev, "alloc_chan_resources\n"); - /* Channels doing slave DMA can only handle one client. */ - if (dwc->dws || (client && client->slave)) { - if (chan->client_count) - return -EBUSY; - } - /* ASSERT: channel is idle */ if (dma_readl(dw, CH_EN) & dwc->mask) { dev_dbg(&chan->dev, "DMA channel not idle?\n"); @@ -789,23 +782,17 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan, cfghi = DWC_CFGH_FIFO_MODE; cfglo = 0; - slave = client->slave; - if (slave) { + dws = dwc->dws; + if (dws) { /* * We need controller-specific data to set up slave * transfers. 
*/ - BUG_ON(!slave->dma_dev || slave->dma_dev != dw->dma.dev); - - dws = container_of(slave, struct dw_dma_slave, slave); + BUG_ON(!dws->dma_dev || dws->dma_dev != dw->dma.dev); - dwc->dws = dws; cfghi = dws->cfg_hi; cfglo = dws->cfg_lo; - } else { - dwc->dws = NULL; } - channel_writel(dwc, CFG_LO, cfglo); channel_writel(dwc, CFG_HI, cfghi); diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c index 6c11f4d..7a34118 100644 --- a/drivers/mmc/host/atmel-mci.c +++ b/drivers/mmc/host/atmel-mci.c @@ -1441,60 +1441,6 @@ static irqreturn_t atmci_detect_interrupt(int irq, void *dev_id) return IRQ_HANDLED; } -#ifdef CONFIG_MMC_ATMELMCI_DMA - -static inline struct atmel_mci * -dma_client_to_atmel_mci(struct dma_client *client) -{ - return container_of(client, struct atmel_mci, dma.client); -} - -static enum dma_state_client atmci_dma_event(struct dma_client *client, - struct dma_chan *chan, enum dma_state state) -{ - struct atmel_mci *host; - enum dma_state_client ret = DMA_NAK; - - host = dma_client_to_atmel_mci(client); - - switch (state) { - case DMA_RESOURCE_AVAILABLE: - spin_lock_bh(&host->lock); - if (!host->dma.chan) { - host->dma.chan = chan; - ret = DMA_ACK; - } - spin_unlock_bh(&host->lock); - - if (ret == DMA_ACK) - dev_info(&host->pdev->dev, - "Using %s for DMA transfers\n", - chan->dev.bus_id); - break; - - case DMA_RESOURCE_REMOVED: - spin_lock_bh(&host->lock); - if (host->dma.chan == chan) { - host->dma.chan = NULL; - ret = DMA_ACK; - } - spin_unlock_bh(&host->lock); - - if (ret == DMA_ACK) - dev_info(&host->pdev->dev, - "Lost %s, falling back to PIO\n", - chan->dev.bus_id); - break; - - default: - break; - } - - - return ret; -} -#endif /* CONFIG_MMC_ATMELMCI_DMA */ - static int __init atmci_init_slot(struct atmel_mci *host, struct mci_slot_pdata *slot_data, unsigned int id, u32 sdc_reg) @@ -1598,6 +1544,18 @@ static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot, mmc_free_host(slot->mmc); } +#ifdef CONFIG_MMC_ATMELMCI_DMA +static enum dma_state_client filter(struct dma_chan *chan, void *slave) +{ + struct dw_dma_slave *dws = slave; + + if (dws->dma_dev == chan->device->dev) + return DMA_ACK; + else + return DMA_DUP; +} +#endif + static int __init atmci_probe(struct platform_device *pdev) { struct mci_platform_data *pdata; @@ -1650,22 +1608,20 @@ static int __init atmci_probe(struct platform_device *pdev) goto err_request_irq; #ifdef CONFIG_MMC_ATMELMCI_DMA - if (pdata->dma_slave) { - struct dma_slave *slave = pdata->dma_slave; + if (pdata->dma_slave.dma_dev) { + struct dw_dma_slave *dws = &pdata->dma_slave; + dma_cap_mask_t mask; - slave->tx_reg = regs->start + MCI_TDR; - slave->rx_reg = regs->start + MCI_RDR; + dws->tx_reg = regs->start + MCI_TDR; + dws->rx_reg = regs->start + MCI_RDR; /* Try to grab a DMA channel */ - host->dma.client.event_callback = atmci_dma_event; - dma_cap_set(DMA_SLAVE, host->dma.client.cap_mask); - host->dma.client.slave = slave; - - dma_async_client_register(&host->dma.client); - dma_async_client_chan_request(&host->dma.client); - } else { - dev_notice(&pdev->dev, "DMA not available, using PIO\n"); + dma_cap_zero(mask); + dma_cap_set(DMA_SLAVE, mask); + host->dma.chan = dma_request_channel(mask, filter, dws); } + if (!host->dma.chan) + dev_notice(&pdev->dev, "DMA not available, using PIO\n"); #endif /* CONFIG_MMC_ATMELMCI_DMA */ platform_set_drvdata(pdev, host); @@ -1697,8 +1653,8 @@ static int __init atmci_probe(struct platform_device *pdev) err_init_slot: #ifdef CONFIG_MMC_ATMELMCI_DMA - if (pdata->dma_slave) - 
dma_async_client_unregister(&host->dma.client); + if (host->dma.chan) + dma_release_channel(host->dma.chan); #endif free_irq(irq, host); err_request_irq: @@ -1729,8 +1685,8 @@ static int __exit atmci_remove(struct platform_device *pdev) clk_disable(host->mck); #ifdef CONFIG_MMC_ATMELMCI_DMA - if (host->dma.client.slave) - dma_async_client_unregister(&host->dma.client); + if (host->dma.chan) + dma_release_channel(host->dma.chan); #endif free_irq(platform_get_irq(pdev, 0), host); @@ -1759,7 +1715,7 @@ static void __exit atmci_exit(void) platform_driver_unregister(&atmci_driver); } -module_init(atmci_init); +late_initcall(atmci_init); /* try to load after dma driver when built-in */ module_exit(atmci_exit); MODULE_DESCRIPTION("Atmel Multimedia Card Interface driver"); diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index 6f2d070..d63544c 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -96,17 +96,6 @@ enum dma_transaction_type { /* last transaction type for creation of the capabilities mask */ #define DMA_TX_TYPE_END (DMA_SLAVE + 1) -/** - * enum dma_slave_width - DMA slave register access width. - * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses - * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses - * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses - */ -enum dma_slave_width { - DMA_SLAVE_WIDTH_8BIT, - DMA_SLAVE_WIDTH_16BIT, - DMA_SLAVE_WIDTH_32BIT, -}; /** * enum dma_ctrl_flags - DMA flags to augment operation preparation, @@ -133,32 +122,6 @@ enum dma_ctrl_flags { typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t; /** - * struct dma_slave - Information about a DMA slave - * @dev: device acting as DMA slave - * @dma_dev: required DMA master device. If non-NULL, the client can not be - * bound to other masters than this. - * @tx_reg: physical address of data register used for - * memory-to-peripheral transfers - * @rx_reg: physical address of data register used for - * peripheral-to-memory transfers - * @reg_width: peripheral register width - * - * If dma_dev is non-NULL, the client can not be bound to other DMA - * masters than the one corresponding to this device. The DMA master - * driver may use this to determine if there is controller-specific - * data wrapped around this struct. Drivers of platform code that sets - * the dma_dev field must therefore make sure to use an appropriate - * controller-specific dma slave structure wrapping this struct. - */ -struct dma_slave { - struct device *dev; - struct device *dma_dev; - dma_addr_t tx_reg; - dma_addr_t rx_reg; - enum dma_slave_width reg_width; -}; - -/** * struct dma_chan_percpu - the per-CPU part of struct dma_chan * @refcount: local_t used for open-coded "bigref" counting * @memcpy_count: transaction counter @@ -248,7 +211,6 @@ typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filt struct dma_client { dma_event_callback event_callback; dma_cap_mask_t cap_mask; - struct dma_slave *slave; struct list_head global_node; }; diff --git a/include/linux/dw_dmac.h b/include/linux/dw_dmac.h index 04d217b..d797dde 100644 --- a/include/linux/dw_dmac.h +++ b/include/linux/dw_dmac.h @@ -22,14 +22,34 @@ struct dw_dma_platform_data { }; /** + * enum dw_dma_slave_width - DMA slave register access width. 
+ * @DMA_SLAVE_WIDTH_8BIT: Do 8-bit slave register accesses + * @DMA_SLAVE_WIDTH_16BIT: Do 16-bit slave register accesses + * @DMA_SLAVE_WIDTH_32BIT: Do 32-bit slave register accesses + */ +enum dw_dma_slave_width { + DW_DMA_SLAVE_WIDTH_8BIT, + DW_DMA_SLAVE_WIDTH_16BIT, + DW_DMA_SLAVE_WIDTH_32BIT, +}; + +/** * struct dw_dma_slave - Controller-specific information about a slave - * @slave: Generic information about the slave - * @ctl_lo: Platform-specific initializer for the CTL_LO register + * + * @dma_dev: required DMA master device + * @tx_reg: physical address of data register used for + * memory-to-peripheral transfers + * @rx_reg: physical address of data register used for + * peripheral-to-memory transfers + * @reg_width: peripheral register width * @cfg_hi: Platform-specific initializer for the CFG_HI register * @cfg_lo: Platform-specific initializer for the CFG_LO register */ struct dw_dma_slave { - struct dma_slave slave; + struct device *dma_dev; + dma_addr_t tx_reg; + dma_addr_t rx_reg; + enum dw_dma_slave_width reg_width; u32 cfg_hi; u32 cfg_lo; }; @@ -54,9 +74,4 @@ struct dw_dma_slave { #define DWC_CFGL_HS_DST_POL (1 << 18) /* dst handshake active low */ #define DWC_CFGL_HS_SRC_POL (1 << 19) /* src handshake active low */ -static inline struct dw_dma_slave *to_dw_dma_slave(struct dma_slave *slave) -{ - return container_of(slave, struct dw_dma_slave, slave); -} - #endif /* DW_DMAC_H */ -- cgit v0.10.2 From 209b84a88fe81341b4d8d465acc4a67cb7c3feb3 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:17 -0700 Subject: dmaengine: replace dma_async_client_register with dmaengine_get Now that clients no longer need to be notified of channel arrival dma_async_client_register can simply increment the dmaengine_ref_count. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c index 2cdf7a0..f21147f 100644 --- a/crypto/async_tx/async_tx.c +++ b/crypto/async_tx/async_tx.c @@ -28,120 +28,9 @@ #include #ifdef CONFIG_DMA_ENGINE -static enum dma_state_client -dma_channel_add_remove(struct dma_client *client, - struct dma_chan *chan, enum dma_state state); - -static struct dma_client async_tx_dma = { - .event_callback = dma_channel_add_remove, - /* .cap_mask == 0 defaults to all channels */ -}; - -/** - * async_tx_lock - protect modification of async_tx_master_list and serialize - * rebalance operations - */ -static DEFINE_SPINLOCK(async_tx_lock); - -static LIST_HEAD(async_tx_master_list); - -static void -free_dma_chan_ref(struct rcu_head *rcu) -{ - struct dma_chan_ref *ref; - ref = container_of(rcu, struct dma_chan_ref, rcu); - kfree(ref); -} - -static void -init_dma_chan_ref(struct dma_chan_ref *ref, struct dma_chan *chan) -{ - INIT_LIST_HEAD(&ref->node); - INIT_RCU_HEAD(&ref->rcu); - ref->chan = chan; - atomic_set(&ref->count, 0); -} - -static enum dma_state_client -dma_channel_add_remove(struct dma_client *client, - struct dma_chan *chan, enum dma_state state) -{ - unsigned long found, flags; - struct dma_chan_ref *master_ref, *ref; - enum dma_state_client ack = DMA_DUP; /* default: take no action */ - - switch (state) { - case DMA_RESOURCE_AVAILABLE: - found = 0; - rcu_read_lock(); - list_for_each_entry_rcu(ref, &async_tx_master_list, node) - if (ref->chan == chan) { - found = 1; - break; - } - rcu_read_unlock(); - - pr_debug("async_tx: dma resource available [%s]\n", - found ? 
"old" : "new"); - - if (!found) - ack = DMA_ACK; - else - break; - - /* add the channel to the generic management list */ - master_ref = kmalloc(sizeof(*master_ref), GFP_KERNEL); - if (master_ref) { - init_dma_chan_ref(master_ref, chan); - spin_lock_irqsave(&async_tx_lock, flags); - list_add_tail_rcu(&master_ref->node, - &async_tx_master_list); - spin_unlock_irqrestore(&async_tx_lock, - flags); - } else { - printk(KERN_WARNING "async_tx: unable to create" - " new master entry in response to" - " a DMA_RESOURCE_ADDED event" - " (-ENOMEM)\n"); - return 0; - } - break; - case DMA_RESOURCE_REMOVED: - found = 0; - spin_lock_irqsave(&async_tx_lock, flags); - list_for_each_entry(ref, &async_tx_master_list, node) - if (ref->chan == chan) { - list_del_rcu(&ref->node); - call_rcu(&ref->rcu, free_dma_chan_ref); - found = 1; - break; - } - spin_unlock_irqrestore(&async_tx_lock, flags); - - pr_debug("async_tx: dma resource removed [%s]\n", - found ? "ours" : "not ours"); - - if (found) - ack = DMA_ACK; - else - break; - break; - case DMA_RESOURCE_SUSPEND: - case DMA_RESOURCE_RESUME: - printk(KERN_WARNING "async_tx: does not support dma channel" - " suspend/resume\n"); - break; - default: - BUG(); - } - - return ack; -} - static int __init async_tx_init(void) { - dma_async_client_register(&async_tx_dma); - dma_async_client_chan_request(&async_tx_dma); + dmaengine_get(); printk(KERN_INFO "async_tx: api initialized (async)\n"); @@ -150,7 +39,7 @@ static int __init async_tx_init(void) static void __exit async_tx_exit(void) { - dma_async_client_unregister(&async_tx_dma); + dmaengine_put(); } /** diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 90aca50..3f1849b 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -600,10 +600,9 @@ static void dma_clients_notify_available(void) } /** - * dma_async_client_register - register a &dma_client - * @client: ptr to a client structure with valid 'event_callback' and 'cap_mask' + * dmaengine_get - register interest in dma_channels */ -void dma_async_client_register(struct dma_client *client) +void dmaengine_get(void) { struct dma_device *device, *_d; struct dma_chan *chan; @@ -634,25 +633,18 @@ void dma_async_client_register(struct dma_client *client) */ if (dmaengine_ref_count == 1) dma_channel_rebalance(); - list_add_tail(&client->global_node, &dma_client_list); mutex_unlock(&dma_list_mutex); } -EXPORT_SYMBOL(dma_async_client_register); +EXPORT_SYMBOL(dmaengine_get); /** - * dma_async_client_unregister - unregister a client and free the &dma_client - * @client: &dma_client to free - * - * Force frees any allocated DMA channels, frees the &dma_client memory + * dmaengine_put - let dma drivers be removed when ref_count == 0 */ -void dma_async_client_unregister(struct dma_client *client) +void dmaengine_put(void) { struct dma_device *device; struct dma_chan *chan; - if (!client) - return; - mutex_lock(&dma_list_mutex); dmaengine_ref_count--; BUG_ON(dmaengine_ref_count < 0); @@ -663,11 +655,9 @@ void dma_async_client_unregister(struct dma_client *client) list_for_each_entry(chan, &device->channels, device_node) dma_chan_put(chan); } - - list_del(&client->global_node); mutex_unlock(&dma_list_mutex); } -EXPORT_SYMBOL(dma_async_client_unregister); +EXPORT_SYMBOL(dmaengine_put); /** * dma_async_client_chan_request - send all available channels to the diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index d63544c..37d95db 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -318,8 +318,8 @@ struct 
dma_device { /* --- public DMA engine API --- */ -void dma_async_client_register(struct dma_client *client); -void dma_async_client_unregister(struct dma_client *client); +void dmaengine_get(void); +void dmaengine_put(void); void dma_async_client_chan_request(struct dma_client *client); dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan, void *dest, void *src, size_t len); diff --git a/net/core/dev.c b/net/core/dev.c index bbb07db..7596fc9 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -4894,8 +4894,7 @@ static int __init netdev_dma_register(void) } spin_lock_init(&net_dma.lock); dma_cap_set(DMA_MEMCPY, net_dma.client.cap_mask); - dma_async_client_register(&net_dma.client); - dma_async_client_chan_request(&net_dma.client); + dmaengine_get(); return 0; } -- cgit v0.10.2 From aa1e6f1a385eb2b04171ec841f3b760091e4a8ee Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:17 -0700 Subject: dmaengine: kill struct dma_client and supporting infrastructure All users have been converted to either the general-purpose allocator, dma_find_channel, or dma_request_channel. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 3f1849b..9fc91f9 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -31,15 +31,12 @@ * * LOCKING: * - * The subsystem keeps two global lists, dma_device_list and dma_client_list. - * Both of these are protected by a mutex, dma_list_mutex. + * The subsystem keeps a global list of dma_device structs it is protected by a + * mutex, dma_list_mutex. * * Each device has a channels list, which runs unlocked but is never modified * once the device is registered, it's just setup by the driver. * - * Each client is responsible for keeping track of the channels it uses. See - * the definition of dma_event_callback in dmaengine.h. - * * Each device has a kref, which is initialized to 1 when the device is * registered. A kref_get is done for each device registered. When the * device is released, the corresponding kref_put is done in the release @@ -74,7 +71,6 @@ static DEFINE_MUTEX(dma_list_mutex); static LIST_HEAD(dma_device_list); -static LIST_HEAD(dma_client_list); static long dmaengine_ref_count; /* --- sysfs implementation --- */ @@ -189,7 +185,7 @@ static int dma_chan_get(struct dma_chan *chan) /* allocate upon first client reference */ if (chan->client_count == 1 && err == 0) { - int desc_cnt = chan->device->device_alloc_chan_resources(chan, NULL); + int desc_cnt = chan->device->device_alloc_chan_resources(chan); if (desc_cnt < 0) { err = desc_cnt; @@ -218,40 +214,6 @@ static void dma_chan_put(struct dma_chan *chan) chan->device->device_free_chan_resources(chan); } -/** - * dma_client_chan_alloc - try to allocate channels to a client - * @client: &dma_client - * - * Called with dma_list_mutex held. 
- */ -static void dma_client_chan_alloc(struct dma_client *client) -{ - struct dma_device *device; - struct dma_chan *chan; - enum dma_state_client ack; - - /* Find a channel */ - list_for_each_entry(device, &dma_device_list, global_node) { - if (dma_has_cap(DMA_PRIVATE, device->cap_mask)) - continue; - if (!dma_device_satisfies_mask(device, client->cap_mask)) - continue; - - list_for_each_entry(chan, &device->channels, device_node) { - if (!chan->client_count) - continue; - ack = client->event_callback(client, chan, - DMA_RESOURCE_AVAILABLE); - - /* we are done once this client rejects - * an available resource - */ - if (ack == DMA_NAK) - return; - } - } -} - enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie) { enum dma_status status; @@ -585,21 +547,6 @@ void dma_release_channel(struct dma_chan *chan) EXPORT_SYMBOL_GPL(dma_release_channel); /** - * dma_chans_notify_available - broadcast available channels to the clients - */ -static void dma_clients_notify_available(void) -{ - struct dma_client *client; - - mutex_lock(&dma_list_mutex); - - list_for_each_entry(client, &dma_client_list, global_node) - dma_client_chan_alloc(client); - - mutex_unlock(&dma_list_mutex); -} - -/** * dmaengine_get - register interest in dma_channels */ void dmaengine_get(void) @@ -660,19 +607,6 @@ void dmaengine_put(void) EXPORT_SYMBOL(dmaengine_put); /** - * dma_async_client_chan_request - send all available channels to the - * client that satisfy the capability mask - * @client - requester - */ -void dma_async_client_chan_request(struct dma_client *client) -{ - mutex_lock(&dma_list_mutex); - dma_client_chan_alloc(client); - mutex_unlock(&dma_list_mutex); -} -EXPORT_SYMBOL(dma_async_client_chan_request); - -/** * dma_async_device_register - registers DMA devices found * @device: &dma_device */ @@ -765,8 +699,6 @@ int dma_async_device_register(struct dma_device *device) dma_channel_rebalance(); mutex_unlock(&dma_list_mutex); - dma_clients_notify_available(); - return 0; err_out: diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c index dbd5080..a29dda8 100644 --- a/drivers/dma/dw_dmac.c +++ b/drivers/dma/dw_dmac.c @@ -758,8 +758,7 @@ static void dwc_issue_pending(struct dma_chan *chan) spin_unlock_bh(&dwc->lock); } -static int dwc_alloc_chan_resources(struct dma_chan *chan, - struct dma_client *client) +static int dwc_alloc_chan_resources(struct dma_chan *chan) { struct dw_dma_chan *dwc = to_dw_dma_chan(chan); struct dw_dma *dw = to_dw_dma(chan->device); diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c index 0b95dcc..46e0128 100644 --- a/drivers/dma/fsldma.c +++ b/drivers/dma/fsldma.c @@ -366,8 +366,7 @@ static struct fsl_desc_sw *fsl_dma_alloc_descriptor( * * Return - The number of descriptors allocated. 
*/ -static int fsl_dma_alloc_chan_resources(struct dma_chan *chan, - struct dma_client *client) +static int fsl_dma_alloc_chan_resources(struct dma_chan *chan) { struct fsl_dma_chan *fsl_chan = to_fsl_chan(chan); diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c index 6607fdd..e42e1ae 100644 --- a/drivers/dma/ioat_dma.c +++ b/drivers/dma/ioat_dma.c @@ -734,8 +734,7 @@ static void ioat2_dma_massage_chan_desc(struct ioat_dma_chan *ioat_chan) * ioat_dma_alloc_chan_resources - returns the number of allocated descriptors * @chan: the channel to be filled out */ -static int ioat_dma_alloc_chan_resources(struct dma_chan *chan, - struct dma_client *client) +static int ioat_dma_alloc_chan_resources(struct dma_chan *chan) { struct ioat_dma_chan *ioat_chan = to_ioat_chan(chan); struct ioat_desc_sw *desc; @@ -1381,7 +1380,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (device->common.device_alloc_chan_resources(dma_chan, NULL) < 1) { + if (device->common.device_alloc_chan_resources(dma_chan) < 1) { dev_err(&device->pdev->dev, "selftest cannot allocate chan resource\n"); err = -ENODEV; diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index be9ea9f..c74ac9e 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -470,8 +470,7 @@ static void iop_chan_start_null_xor(struct iop_adma_chan *iop_chan); * greater than 2x the number slots needed to satisfy a device->max_xor * request. * */ -static int iop_adma_alloc_chan_resources(struct dma_chan *chan, - struct dma_client *client) +static int iop_adma_alloc_chan_resources(struct dma_chan *chan) { char *hw_desc; int idx; @@ -865,7 +864,7 @@ static int __devinit iop_adma_memcpy_self_test(struct iop_adma_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) { + if (iop_adma_alloc_chan_resources(dma_chan) < 1) { err = -ENODEV; goto out; } @@ -963,7 +962,7 @@ iop_adma_xor_zero_sum_self_test(struct iop_adma_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (iop_adma_alloc_chan_resources(dma_chan, NULL) < 1) { + if (iop_adma_alloc_chan_resources(dma_chan) < 1) { err = -ENODEV; goto out; } diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c index 3f46df3..fbaa2f6 100644 --- a/drivers/dma/mv_xor.c +++ b/drivers/dma/mv_xor.c @@ -606,8 +606,7 @@ submit_done: } /* returns the number of allocated descriptors */ -static int mv_xor_alloc_chan_resources(struct dma_chan *chan, - struct dma_client *client) +static int mv_xor_alloc_chan_resources(struct dma_chan *chan) { char *hw_desc; int idx; @@ -957,7 +956,7 @@ static int __devinit mv_xor_memcpy_self_test(struct mv_xor_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (mv_xor_alloc_chan_resources(dma_chan, NULL) < 1) { + if (mv_xor_alloc_chan_resources(dma_chan) < 1) { err = -ENODEV; goto out; } @@ -1052,7 +1051,7 @@ mv_xor_xor_self_test(struct mv_xor_device *device) dma_chan = container_of(device->common.channels.next, struct dma_chan, device_node); - if (mv_xor_alloc_chan_resources(dma_chan, NULL) < 1) { + if (mv_xor_alloc_chan_resources(dma_chan) < 1) { err = -ENODEV; goto out; } diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c index 7a34118..4b567a0 100644 --- a/drivers/mmc/host/atmel-mci.c +++ b/drivers/mmc/host/atmel-mci.c @@ -55,7 +55,6 
@@ enum atmel_mci_state { struct atmel_mci_dma { #ifdef CONFIG_MMC_ATMELMCI_DMA - struct dma_client client; struct dma_chan *chan; struct dma_async_tx_descriptor *data_desc; #endif diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index 37d95db..db050e9 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -29,20 +29,6 @@ #include /** - * enum dma_state - resource PNP/power management state - * @DMA_RESOURCE_SUSPEND: DMA device going into low power state - * @DMA_RESOURCE_RESUME: DMA device returning to full power - * @DMA_RESOURCE_AVAILABLE: DMA device available to the system - * @DMA_RESOURCE_REMOVED: DMA device removed from the system - */ -enum dma_state { - DMA_RESOURCE_SUSPEND, - DMA_RESOURCE_RESUME, - DMA_RESOURCE_AVAILABLE, - DMA_RESOURCE_REMOVED, -}; - -/** * enum dma_state_client - state of the channel in the client * @DMA_ACK: client would like to use, or was using this channel * @DMA_DUP: client has already seen this channel, or is not using this channel @@ -170,23 +156,6 @@ struct dma_chan { void dma_chan_cleanup(struct kref *kref); -/* - * typedef dma_event_callback - function pointer to a DMA event callback - * For each channel added to the system this routine is called for each client. - * If the client would like to use the channel it returns '1' to signal (ack) - * the dmaengine core to take out a reference on the channel and its - * corresponding device. A client must not 'ack' an available channel more - * than once. When a channel is removed all clients are notified. If a client - * is using the channel it must 'ack' the removal. A client must not 'ack' a - * removed channel more than once. - * @client - 'this' pointer for the client context - * @chan - channel to be acted upon - * @state - available or removed - */ -struct dma_client; -typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client, - struct dma_chan *chan, enum dma_state state); - /** * typedef dma_filter_fn - callback filter for dma_request_channel * @chan: channel to be reviewed @@ -199,21 +168,6 @@ typedef enum dma_state_client (*dma_event_callback) (struct dma_client *client, */ typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filter_param); -/** - * struct dma_client - info on the entity making use of DMA services - * @event_callback: func ptr to call when something happens - * @cap_mask: only return channels that satisfy the requested capabilities - * a value of zero corresponds to any capability - * @slave: data for preparing slave transfer. Must be non-NULL iff the - * DMA_SLAVE capability is requested. 
- * @global_node: list_head for global dma_client_list - */ -struct dma_client { - dma_event_callback event_callback; - dma_cap_mask_t cap_mask; - struct list_head global_node; -}; - typedef void (*dma_async_tx_callback)(void *dma_async_param); /** * struct dma_async_tx_descriptor - async transaction descriptor @@ -285,8 +239,7 @@ struct dma_device { int dev_id; struct device *dev; - int (*device_alloc_chan_resources)(struct dma_chan *chan, - struct dma_client *client); + int (*device_alloc_chan_resources)(struct dma_chan *chan); void (*device_free_chan_resources)(struct dma_chan *chan); struct dma_async_tx_descriptor *(*device_prep_dma_memcpy)( @@ -320,7 +273,6 @@ struct dma_device { void dmaengine_get(void); void dmaengine_put(void); -void dma_async_client_chan_request(struct dma_client *client); dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan, void *dest, void *src, size_t len); dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan, diff --git a/net/core/dev.c b/net/core/dev.c index 7596fc9..ac55d84 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -167,25 +167,6 @@ static DEFINE_SPINLOCK(ptype_lock); static struct list_head ptype_base[PTYPE_HASH_SIZE] __read_mostly; static struct list_head ptype_all __read_mostly; /* Taps */ -#ifdef CONFIG_NET_DMA -struct net_dma { - struct dma_client client; - spinlock_t lock; - cpumask_t channel_mask; - struct dma_chan **channels; -}; - -static enum dma_state_client -netdev_dma_event(struct dma_client *client, struct dma_chan *chan, - enum dma_state state); - -static struct net_dma net_dma = { - .client = { - .event_callback = netdev_dma_event, - }, -}; -#endif - /* * The @dev_base_head list is protected by @dev_base_lock and the rtnl * semaphore. @@ -4826,81 +4807,6 @@ static int dev_cpu_callback(struct notifier_block *nfb, return NOTIFY_OK; } -#ifdef CONFIG_NET_DMA -/** - * netdev_dma_event - event callback for the net_dma_client - * @client: should always be net_dma_client - * @chan: DMA channel for the event - * @state: DMA state to be handled - */ -static enum dma_state_client -netdev_dma_event(struct dma_client *client, struct dma_chan *chan, - enum dma_state state) -{ - int i, found = 0, pos = -1; - struct net_dma *net_dma = - container_of(client, struct net_dma, client); - enum dma_state_client ack = DMA_DUP; /* default: take no action */ - - spin_lock(&net_dma->lock); - switch (state) { - case DMA_RESOURCE_AVAILABLE: - for (i = 0; i < nr_cpu_ids; i++) - if (net_dma->channels[i] == chan) { - found = 1; - break; - } else if (net_dma->channels[i] == NULL && pos < 0) - pos = i; - - if (!found && pos >= 0) { - ack = DMA_ACK; - net_dma->channels[pos] = chan; - cpu_set(pos, net_dma->channel_mask); - } - break; - case DMA_RESOURCE_REMOVED: - for (i = 0; i < nr_cpu_ids; i++) - if (net_dma->channels[i] == chan) { - found = 1; - pos = i; - break; - } - - if (found) { - ack = DMA_ACK; - cpu_clear(pos, net_dma->channel_mask); - net_dma->channels[i] = NULL; - } - break; - default: - break; - } - spin_unlock(&net_dma->lock); - - return ack; -} - -/** - * netdev_dma_register - register the networking subsystem as a DMA client - */ -static int __init netdev_dma_register(void) -{ - net_dma.channels = kzalloc(nr_cpu_ids * sizeof(struct net_dma), - GFP_KERNEL); - if (unlikely(!net_dma.channels)) { - printk(KERN_NOTICE - "netdev_dma: no memory for net_dma.channels\n"); - return -ENOMEM; - } - spin_lock_init(&net_dma.lock); - dma_cap_set(DMA_MEMCPY, net_dma.client.cap_mask); - dmaengine_get(); - return 0; -} - -#else -static int __init 
netdev_dma_register(void) { return -ENODEV; } -#endif /* CONFIG_NET_DMA */ /** * netdev_increment_features - increment feature set by one @@ -5120,14 +5026,15 @@ static int __init net_dev_init(void) if (register_pernet_device(&default_device_ops)) goto out; - netdev_dma_register(); - open_softirq(NET_TX_SOFTIRQ, net_tx_action); open_softirq(NET_RX_SOFTIRQ, net_rx_action); hotcpu_notifier(dev_cpu_callback, 0); dst_init(); dev_mcast_init(); + #ifdef CONFIG_NET_DMA + dmaengine_get(); + #endif rc = 0; out: return rc; -- cgit v0.10.2 From f27c580c3628d79b17f38976d842a6d7f3616e2e Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:18 -0700 Subject: dmaengine: remove 'bigref' infrastructure Reference counting is done at the module level so clients need not worry that a channel will leave while they are actively using dmaengine. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 9fc91f9..b245c38 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -34,26 +34,15 @@ * The subsystem keeps a global list of dma_device structs it is protected by a * mutex, dma_list_mutex. * + * A subsystem can get access to a channel by calling dmaengine_get() followed + * by dma_find_channel(), or if it has need for an exclusive channel it can call + * dma_request_channel(). Once a channel is allocated a reference is taken + * against its corresponding driver to disable removal. + * * Each device has a channels list, which runs unlocked but is never modified * once the device is registered, it's just setup by the driver. * - * Each device has a kref, which is initialized to 1 when the device is - * registered. A kref_get is done for each device registered. When the - * device is released, the corresponding kref_put is done in the release - * method. Every time one of the device's channels is allocated to a client, - * a kref_get occurs. When the channel is freed, the corresponding kref_put - * happens. The device's release function does a completion, so - * unregister_device does a remove event, device_unregister, a kref_put - * for the first reference, then waits on the completion for all other - * references to finish. - * - * Each channel has an open-coded implementation of Rusty Russell's "bigref," - * with a kref and a per_cpu local_t. A dma_chan_get is called when a client - * signals that it wants to use a channel, and dma_chan_put is called when - * a channel is removed or a client using it is unregistered. A client can - * take extra references per outstanding transaction, as is the case with - * the NET DMA client. The release function does a kref_put on the device. 
- * -ChrisL, DanW + * See Documentation/dmaengine.txt for more details */ #include @@ -114,18 +103,9 @@ static struct device_attribute dma_attrs[] = { __ATTR_NULL }; -static void dma_async_device_cleanup(struct kref *kref); - -static void dma_dev_release(struct device *dev) -{ - struct dma_chan *chan = to_dma_chan(dev); - kref_put(&chan->device->refcount, dma_async_device_cleanup); -} - static struct class dma_devclass = { .name = "dma", .dev_attrs = dma_attrs, - .dev_release = dma_dev_release, }; /* --- client and device registration --- */ @@ -233,29 +213,6 @@ enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie) EXPORT_SYMBOL(dma_sync_wait); /** - * dma_chan_cleanup - release a DMA channel's resources - * @kref: kernel reference structure that contains the DMA channel device - */ -void dma_chan_cleanup(struct kref *kref) -{ - struct dma_chan *chan = container_of(kref, struct dma_chan, refcount); - kref_put(&chan->device->refcount, dma_async_device_cleanup); -} -EXPORT_SYMBOL(dma_chan_cleanup); - -static void dma_chan_free_rcu(struct rcu_head *rcu) -{ - struct dma_chan *chan = container_of(rcu, struct dma_chan, rcu); - - kref_put(&chan->refcount, dma_chan_cleanup); -} - -static void dma_chan_release(struct dma_chan *chan) -{ - call_rcu(&chan->rcu, dma_chan_free_rcu); -} - -/** * dma_cap_mask_all - enable iteration over all operation types */ static dma_cap_mask_t dma_cap_mask_all; @@ -641,9 +598,6 @@ int dma_async_device_register(struct dma_device *device) BUG_ON(!device->device_issue_pending); BUG_ON(!device->dev); - init_completion(&device->done); - kref_init(&device->refcount); - mutex_lock(&dma_list_mutex); device->dev_id = id++; mutex_unlock(&dma_list_mutex); @@ -662,19 +616,11 @@ int dma_async_device_register(struct dma_device *device) rc = device_register(&chan->dev); if (rc) { - chancnt--; free_percpu(chan->local); chan->local = NULL; goto err_out; } - - /* One for the channel, one of the class device */ - kref_get(&device->refcount); - kref_get(&device->refcount); - kref_init(&chan->refcount); chan->client_count = 0; - chan->slow_ref = 0; - INIT_RCU_HEAD(&chan->rcu); } device->chancnt = chancnt; @@ -705,9 +651,7 @@ err_out: list_for_each_entry(chan, &device->channels, device_node) { if (chan->local == NULL) continue; - kref_put(&device->refcount, dma_async_device_cleanup); device_unregister(&chan->dev); - chancnt--; free_percpu(chan->local); } return rc; @@ -715,20 +659,11 @@ err_out: EXPORT_SYMBOL(dma_async_device_register); /** - * dma_async_device_cleanup - function called when all references are released - * @kref: kernel reference object - */ -static void dma_async_device_cleanup(struct kref *kref) -{ - struct dma_device *device; - - device = container_of(kref, struct dma_device, refcount); - complete(&device->done); -} - -/** * dma_async_device_unregister - unregister a DMA device * @device: &dma_device + * + * This routine is called by dma driver exit routines, dmaengine holds module + * references to prevent it being called while channels are in use. 
*/ void dma_async_device_unregister(struct dma_device *device) { @@ -744,11 +679,7 @@ void dma_async_device_unregister(struct dma_device *device) "%s called while %d clients hold a reference\n", __func__, chan->client_count); device_unregister(&chan->dev); - dma_chan_release(chan); } - - kref_put(&device->refcount, dma_async_device_cleanup); - wait_for_completion(&device->done); } EXPORT_SYMBOL(dma_async_device_unregister); diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index c74ac9e..df0e37d 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -1253,7 +1253,6 @@ static int __devinit iop_adma_probe(struct platform_device *pdev) spin_lock_init(&iop_chan->lock); INIT_LIST_HEAD(&iop_chan->chain); INIT_LIST_HEAD(&iop_chan->all_slots); - INIT_RCU_HEAD(&iop_chan->common.rcu); iop_chan->common.device = dma_dev; list_add_tail(&iop_chan->common.device_node, &dma_dev->channels); diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c index fbaa2f6..d35cbd1 100644 --- a/drivers/dma/mv_xor.c +++ b/drivers/dma/mv_xor.c @@ -1219,7 +1219,6 @@ static int __devinit mv_xor_probe(struct platform_device *pdev) INIT_LIST_HEAD(&mv_chan->chain); INIT_LIST_HEAD(&mv_chan->completed_slots); INIT_LIST_HEAD(&mv_chan->all_slots); - INIT_RCU_HEAD(&mv_chan->common.rcu); mv_chan->common.device = dma_dev; list_add_tail(&mv_chan->common.device_node, &dma_dev->channels); diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index db050e9..bca2fc7 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -142,10 +142,6 @@ struct dma_chan { int chan_id; struct device dev; - struct kref refcount; - int slow_ref; - struct rcu_head rcu; - struct list_head device_node; struct dma_chan_percpu *local; int client_count; @@ -233,9 +229,6 @@ struct dma_device { dma_cap_mask_t cap_mask; int max_xor; - struct kref refcount; - struct completion done; - int dev_id; struct device *dev; -- cgit v0.10.2 From 7dd602510128d7a64b11ff3b7d4f30ac8e3946ce Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:19 -0700 Subject: dmaengine: kill enum dma_state_client DMA_NAK is now useless. We can just use a bool instead. Reviewed-by: Andrew Morton Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index b245c38..cdc8ecf 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -440,7 +440,7 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v { struct dma_device *device, *_d; struct dma_chan *chan = NULL; - enum dma_state_client ack; + bool ack; int err; /* Find a channel */ @@ -453,9 +453,9 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v if (fn) ack = fn(chan, fn_param); else - ack = DMA_ACK; + ack = true; - if (ack == DMA_ACK) { + if (ack) { /* Found a suitable channel, try to grab, prep, and * return it. 
We first set DMA_PRIVATE to disable * balance_ref_count as this channel will not be @@ -473,15 +473,9 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v dev_name(&chan->dev), err); else break; - } else if (ack == DMA_DUP) { - pr_debug("%s: %s filter said DMA_DUP\n", - __func__, dev_name(&chan->dev)); - } else if (ack == DMA_NAK) { - pr_debug("%s: %s filter said DMA_NAK\n", - __func__, dev_name(&chan->dev)); - break; } else - WARN_ONCE(1, "filter_fn: unknown response?\n"); + pr_debug("%s: %s filter said false\n", + __func__, dev_name(&chan->dev)); chan = NULL; } mutex_unlock(&dma_list_mutex); diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c index 1d6e48f..c77d47c 100644 --- a/drivers/dma/dmatest.c +++ b/drivers/dma/dmatest.c @@ -363,12 +363,12 @@ static int dmatest_add_channel(struct dma_chan *chan) return 0; } -static enum dma_state_client filter(struct dma_chan *chan, void *param) +static bool filter(struct dma_chan *chan, void *param) { if (!dmatest_match_channel(chan) || !dmatest_match_device(chan->device)) - return DMA_DUP; + return false; else - return DMA_ACK; + return true; } static int __init dmatest_init(void) diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c index 4b567a0..b0042d0 100644 --- a/drivers/mmc/host/atmel-mci.c +++ b/drivers/mmc/host/atmel-mci.c @@ -1544,14 +1544,14 @@ static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot, } #ifdef CONFIG_MMC_ATMELMCI_DMA -static enum dma_state_client filter(struct dma_chan *chan, void *slave) +static bool filter(struct dma_chan *chan, void *slave) { struct dw_dma_slave *dws = slave; if (dws->dma_dev == chan->device->dev) - return DMA_ACK; + return true; else - return DMA_DUP; + return false; } #endif diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index bca2fc7..1419a50 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -29,18 +29,6 @@ #include /** - * enum dma_state_client - state of the channel in the client - * @DMA_ACK: client would like to use, or was using this channel - * @DMA_DUP: client has already seen this channel, or is not using this channel - * @DMA_NAK: client does not want to see any more channels - */ -enum dma_state_client { - DMA_ACK, - DMA_DUP, - DMA_NAK, -}; - -/** * typedef dma_cookie_t - an opaque DMA cookie * * if dma_cookie_t is >0 it's a DMA request cookie, <0 it's an error code @@ -160,9 +148,10 @@ void dma_chan_cleanup(struct kref *kref); * When this optional parameter is specified in a call to dma_request_channel a * suitable channel is passed to this routine for further dispositioning before * being returned. Where 'suitable' indicates a non-busy channel that - * satisfies the given capability mask. + * satisfies the given capability mask. It returns 'true' to indicate that the + * channel is suitable. */ -typedef enum dma_state_client (*dma_filter_fn)(struct dma_chan *chan, void *filter_param); +typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param); typedef void (*dma_async_tx_callback)(void *dma_async_param); /** -- cgit v0.10.2 From f38822033d9eafd8a7b12dd7ad6dea26480ba339 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:19 -0700 Subject: iop-adma: let devm do its job, don't duplicate free No need to free stuff that the devm infrastructure will take care of... 
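For context, a minimal hypothetical sketch of the devm pattern this change relies on (the example_* names are illustrative, not part of the iop-adma driver): resources obtained through devm_* helpers are released by the driver core when the device is unbound, so remove() must not free them again -- the duplicate teardown deleted below.

#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/slab.h>

struct example_dev {
	int dummy;
};

static irqreturn_t example_isr(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_probe(struct platform_device *pdev)
{
	struct example_dev *ed;
	int irq, ret;

	/* freed automatically when the device is unbound */
	ed = devm_kzalloc(&pdev->dev, sizeof(*ed), GFP_KERNEL);
	if (!ed)
		return -ENOMEM;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	/* likewise released by the devm infrastructure, not by remove() */
	ret = devm_request_irq(&pdev->dev, irq, example_isr, 0,
			       dev_name(&pdev->dev), ed);
	if (ret)
		return ret;

	platform_set_drvdata(pdev, ed);
	return 0;
}

static int example_remove(struct platform_device *pdev)
{
	/* nothing to free by hand: no free_irq(), no kfree() */
	return 0;
}
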
Signed-off-by: Dan Williams diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index df0e37d..0c6bbf5 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -1113,26 +1113,13 @@ static int __devexit iop_adma_remove(struct platform_device *dev) struct iop_adma_device *device = platform_get_drvdata(dev); struct dma_chan *chan, *_chan; struct iop_adma_chan *iop_chan; - int i; struct iop_adma_platform_data *plat_data = dev->dev.platform_data; dma_async_device_unregister(&device->common); - for (i = 0; i < 3; i++) { - unsigned int irq; - irq = platform_get_irq(dev, i); - free_irq(irq, device); - } - dma_free_coherent(&dev->dev, plat_data->pool_size, device->dma_desc_pool_virt, device->dma_desc_pool); - do { - struct resource *res; - res = platform_get_resource(dev, IORESOURCE_MEM, 0); - release_mem_region(res->start, res->end - res->start); - } while (0); - list_for_each_entry_safe(chan, _chan, &device->common.channels, device_node) { iop_chan = to_iop_adma_chan(chan); -- cgit v0.10.2 From 0d603f611d6515049fbceb0267ded43c33b95451 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:20 -0700 Subject: iop-adma: kill debug BUG_ON This BUG_ON caught problems in early development but now it is in the way as it invalidly triggers when trying to remove the module. Signed-off-by: Dan Williams diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index 0c6bbf5..e82e39e 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -269,8 +269,6 @@ static void __iop_adma_slot_cleanup(struct iop_adma_chan *iop_chan) break; } - BUG_ON(!seen_current); - if (cookie > 0) { iop_chan->completed_cookie = cookie; pr_debug("\tcompleted cookie %d\n", cookie); -- cgit v0.10.2 From 630738b9a52bee40cba685f4ff43fbbc28f2e1ff Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:20 -0700 Subject: iop-adma: enable module removal Signed-off-by: Dan Williams diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index e82e39e..ea5440d 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -1413,16 +1413,12 @@ static int __init iop_adma_init (void) return platform_driver_register(&iop_adma_driver); } -/* it's currently unsafe to unload this module */ -#if 0 static void __exit iop_adma_exit (void) { platform_driver_unregister(&iop_adma_driver); return; } module_exit(iop_adma_exit); -#endif - module_init(iop_adma_init); MODULE_AUTHOR("Intel Corporation"); -- cgit v0.10.2 From 4fac7fa57cf8001be259688468c825f836daf739 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:20 -0700 Subject: ioat: do not perform removal actions at shutdown Unregistering services should only happen at "remove" time. This prevents the device from being unregistered while dmaengine clients are still active. Also, the comment on ioat_remove is stale since removal is prevented while a channel may be in use. 
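For illustration only, a sketch of the shape this leaves the PCI glue in (a hypothetical example_* driver, not ioatdma itself): all teardown lives in .remove, and no .shutdown hook is installed, since .shutdown runs at reboot/power-off while dmaengine clients may still hold channel references.

#include <linux/pci.h>

static int __devinit example_pci_probe(struct pci_dev *pdev,
				       const struct pci_device_id *id)
{
	return pci_enable_device(pdev);
}

static void __devexit example_pci_remove(struct pci_dev *pdev)
{
	/* the only place services are unregistered */
	pci_disable_device(pdev);
}

static struct pci_device_id example_pci_tbl[] = {
	{ PCI_DEVICE(PCI_ANY_ID, PCI_ANY_ID) },
	{ }
};

static struct pci_driver example_pci_driver = {
	.name     = "example-dma",
	.id_table = example_pci_tbl,
	.probe    = example_pci_probe,
	.remove   = __devexit_p(example_pci_remove),
	/* deliberately no .shutdown: it runs at reboot while clients
	 * may still be using the channels */
};
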
Reported-by: Alexander Beregalov Signed-off-by: Dan Williams diff --git a/drivers/dma/ioat.c b/drivers/dma/ioat.c index 9b16a3a..4105d65 100644 --- a/drivers/dma/ioat.c +++ b/drivers/dma/ioat.c @@ -75,60 +75,10 @@ static int ioat_dca_enabled = 1; module_param(ioat_dca_enabled, int, 0644); MODULE_PARM_DESC(ioat_dca_enabled, "control support of dca service (default: 1)"); -static int ioat_setup_functionality(struct pci_dev *pdev, void __iomem *iobase) -{ - struct ioat_device *device = pci_get_drvdata(pdev); - u8 version; - int err = 0; - - version = readb(iobase + IOAT_VER_OFFSET); - switch (version) { - case IOAT_VER_1_2: - device->dma = ioat_dma_probe(pdev, iobase); - if (device->dma && ioat_dca_enabled) - device->dca = ioat_dca_init(pdev, iobase); - break; - case IOAT_VER_2_0: - device->dma = ioat_dma_probe(pdev, iobase); - if (device->dma && ioat_dca_enabled) - device->dca = ioat2_dca_init(pdev, iobase); - break; - case IOAT_VER_3_0: - device->dma = ioat_dma_probe(pdev, iobase); - if (device->dma && ioat_dca_enabled) - device->dca = ioat3_dca_init(pdev, iobase); - break; - default: - err = -ENODEV; - break; - } - if (!device->dma) - err = -ENODEV; - return err; -} - -static void ioat_shutdown_functionality(struct pci_dev *pdev) -{ - struct ioat_device *device = pci_get_drvdata(pdev); - - dev_err(&pdev->dev, "Removing dma and dca services\n"); - if (device->dca) { - unregister_dca_provider(device->dca); - free_dca_provider(device->dca); - device->dca = NULL; - } - - if (device->dma) { - ioat_dma_remove(device->dma); - device->dma = NULL; - } -} - static struct pci_driver ioat_pci_driver = { .name = "ioatdma", .id_table = ioat_pci_tbl, .probe = ioat_probe, - .shutdown = ioat_shutdown_functionality, .remove = __devexit_p(ioat_remove), }; @@ -179,7 +129,29 @@ static int __devinit ioat_probe(struct pci_dev *pdev, pci_set_master(pdev); - err = ioat_setup_functionality(pdev, iobase); + switch (readb(iobase + IOAT_VER_OFFSET)) { + case IOAT_VER_1_2: + device->dma = ioat_dma_probe(pdev, iobase); + if (device->dma && ioat_dca_enabled) + device->dca = ioat_dca_init(pdev, iobase); + break; + case IOAT_VER_2_0: + device->dma = ioat_dma_probe(pdev, iobase); + if (device->dma && ioat_dca_enabled) + device->dca = ioat2_dca_init(pdev, iobase); + break; + case IOAT_VER_3_0: + device->dma = ioat_dma_probe(pdev, iobase); + if (device->dma && ioat_dca_enabled) + device->dca = ioat3_dca_init(pdev, iobase); + break; + default: + err = -ENODEV; + break; + } + if (!device->dma) + err = -ENODEV; + if (err) goto err_version; @@ -198,17 +170,21 @@ err_enable_device: return err; } -/* - * It is unsafe to remove this module: if removed while a requested - * dma is outstanding, esp. from tcp, it is possible to hang while - * waiting for something that will never finish. However, if you're - * feeling lucky, this usually works just fine. 
- */ static void __devexit ioat_remove(struct pci_dev *pdev) { struct ioat_device *device = pci_get_drvdata(pdev); - ioat_shutdown_functionality(pdev); + dev_err(&pdev->dev, "Removing dma and dca services\n"); + if (device->dca) { + unregister_dca_provider(device->dca); + free_dca_provider(device->dca); + device->dca = NULL; + } + + if (device->dma) { + ioat_dma_remove(device->dma); + device->dma = NULL; + } kfree(device); } -- cgit v0.10.2 From 41d5e59c1299f27983977bcfe3b360600996051c Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:21 -0700 Subject: dmaengine: add a release for dma class devices and dependent infrastructure Resolves: WARNING: at drivers/base/core.c:122 device_release+0x4d/0x52() Device 'dma0chan0' does not have a release() function, it is broken and must be fixed. The dma_chan_dev object is introduced to gear-match sysfs kobject and dmaengine channel lifetimes. When a channel is removed access to the sysfs entries return -ENODEV until the kobject can be released. The bulk of the change is updates to existing code to handle the extra layer of indirection between a dma_chan and its struct device. Reported-by: Alexander Beregalov Acked-by: Stephen Hemminger Cc: Haavard Skinnemoen Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index cdc8ecf..93c4c9a 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -64,36 +64,75 @@ static long dmaengine_ref_count; /* --- sysfs implementation --- */ +/** + * dev_to_dma_chan - convert a device pointer to the its sysfs container object + * @dev - device node + * + * Must be called under dma_list_mutex + */ +static struct dma_chan *dev_to_dma_chan(struct device *dev) +{ + struct dma_chan_dev *chan_dev; + + chan_dev = container_of(dev, typeof(*chan_dev), device); + return chan_dev->chan; +} + static ssize_t show_memcpy_count(struct device *dev, struct device_attribute *attr, char *buf) { - struct dma_chan *chan = to_dma_chan(dev); + struct dma_chan *chan; unsigned long count = 0; int i; + int err; - for_each_possible_cpu(i) - count += per_cpu_ptr(chan->local, i)->memcpy_count; + mutex_lock(&dma_list_mutex); + chan = dev_to_dma_chan(dev); + if (chan) { + for_each_possible_cpu(i) + count += per_cpu_ptr(chan->local, i)->memcpy_count; + err = sprintf(buf, "%lu\n", count); + } else + err = -ENODEV; + mutex_unlock(&dma_list_mutex); - return sprintf(buf, "%lu\n", count); + return err; } static ssize_t show_bytes_transferred(struct device *dev, struct device_attribute *attr, char *buf) { - struct dma_chan *chan = to_dma_chan(dev); + struct dma_chan *chan; unsigned long count = 0; int i; + int err; - for_each_possible_cpu(i) - count += per_cpu_ptr(chan->local, i)->bytes_transferred; + mutex_lock(&dma_list_mutex); + chan = dev_to_dma_chan(dev); + if (chan) { + for_each_possible_cpu(i) + count += per_cpu_ptr(chan->local, i)->bytes_transferred; + err = sprintf(buf, "%lu\n", count); + } else + err = -ENODEV; + mutex_unlock(&dma_list_mutex); - return sprintf(buf, "%lu\n", count); + return err; } static ssize_t show_in_use(struct device *dev, struct device_attribute *attr, char *buf) { - struct dma_chan *chan = to_dma_chan(dev); + struct dma_chan *chan; + int err; - return sprintf(buf, "%d\n", chan->client_count); + mutex_lock(&dma_list_mutex); + chan = dev_to_dma_chan(dev); + if (chan) + err = sprintf(buf, "%d\n", chan->client_count); + else + err = -ENODEV; + mutex_unlock(&dma_list_mutex); + + return err; } static struct device_attribute dma_attrs[] = { @@ -103,9 +142,18 @@ 
static struct device_attribute dma_attrs[] = { __ATTR_NULL }; +static void chan_dev_release(struct device *dev) +{ + struct dma_chan_dev *chan_dev; + + chan_dev = container_of(dev, typeof(*chan_dev), device); + kfree(chan_dev); +} + static struct class dma_devclass = { .name = "dma", .dev_attrs = dma_attrs, + .dev_release = chan_dev_release, }; /* --- client and device registration --- */ @@ -420,7 +468,7 @@ static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_devic list_for_each_entry(chan, &dev->channels, device_node) { if (chan->client_count) { pr_debug("%s: %s busy\n", - __func__, dev_name(&chan->dev)); + __func__, dma_chan_name(chan)); continue; } ret = chan; @@ -466,22 +514,22 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v if (err == -ENODEV) { pr_debug("%s: %s module removed\n", __func__, - dev_name(&chan->dev)); + dma_chan_name(chan)); list_del_rcu(&device->global_node); } else if (err) pr_err("dmaengine: failed to get %s: (%d)\n", - dev_name(&chan->dev), err); + dma_chan_name(chan), err); else break; } else pr_debug("%s: %s filter said false\n", - __func__, dev_name(&chan->dev)); + __func__, dma_chan_name(chan)); chan = NULL; } mutex_unlock(&dma_list_mutex); pr_debug("%s: %s (%s)\n", __func__, chan ? "success" : "fail", - chan ? dev_name(&chan->dev) : NULL); + chan ? dma_chan_name(chan) : NULL); return chan; } @@ -521,7 +569,7 @@ void dmaengine_get(void) break; } else if (err) pr_err("dmaengine: failed to get %s: (%d)\n", - dev_name(&chan->dev), err); + dma_chan_name(chan), err); } } @@ -601,14 +649,20 @@ int dma_async_device_register(struct dma_device *device) chan->local = alloc_percpu(typeof(*chan->local)); if (chan->local == NULL) continue; + chan->dev = kzalloc(sizeof(*chan->dev), GFP_KERNEL); + if (chan->dev == NULL) { + free_percpu(chan->local); + continue; + } chan->chan_id = chancnt++; - chan->dev.class = &dma_devclass; - chan->dev.parent = device->dev; - dev_set_name(&chan->dev, "dma%dchan%d", + chan->dev->device.class = &dma_devclass; + chan->dev->device.parent = device->dev; + chan->dev->chan = chan; + dev_set_name(&chan->dev->device, "dma%dchan%d", device->dev_id, chan->chan_id); - rc = device_register(&chan->dev); + rc = device_register(&chan->dev->device); if (rc) { free_percpu(chan->local); chan->local = NULL; @@ -645,7 +699,10 @@ err_out: list_for_each_entry(chan, &device->channels, device_node) { if (chan->local == NULL) continue; - device_unregister(&chan->dev); + mutex_lock(&dma_list_mutex); + chan->dev->chan = NULL; + mutex_unlock(&dma_list_mutex); + device_unregister(&chan->dev->device); free_percpu(chan->local); } return rc; @@ -672,7 +729,10 @@ void dma_async_device_unregister(struct dma_device *device) WARN_ONCE(chan->client_count, "%s called while %d clients hold a reference\n", __func__, chan->client_count); - device_unregister(&chan->dev); + mutex_lock(&dma_list_mutex); + chan->dev->chan = NULL; + mutex_unlock(&dma_list_mutex); + device_unregister(&chan->dev->device); } } EXPORT_SYMBOL(dma_async_device_unregister); @@ -845,7 +905,7 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) return DMA_SUCCESS; WARN_ONCE(tx->parent, "%s: speculatively walking dependency chain for" - " %s\n", __func__, dev_name(&tx->chan->dev)); + " %s\n", __func__, dma_chan_name(tx->chan)); /* poll through the dependency chain, return when tx is complete */ do { diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c index c77d47c..3603f1e 100644 --- a/drivers/dma/dmatest.c +++ b/drivers/dma/dmatest.c @@ 
-80,7 +80,7 @@ static bool dmatest_match_channel(struct dma_chan *chan) { if (test_channel[0] == '\0') return true; - return strcmp(dev_name(&chan->dev), test_channel) == 0; + return strcmp(dma_chan_name(chan), test_channel) == 0; } static bool dmatest_match_device(struct dma_device *device) @@ -325,7 +325,7 @@ static int dmatest_add_channel(struct dma_chan *chan) dtc = kmalloc(sizeof(struct dmatest_chan), GFP_KERNEL); if (!dtc) { - pr_warning("dmatest: No memory for %s\n", dev_name(&chan->dev)); + pr_warning("dmatest: No memory for %s\n", dma_chan_name(chan)); return -ENOMEM; } @@ -336,16 +336,16 @@ static int dmatest_add_channel(struct dma_chan *chan) thread = kzalloc(sizeof(struct dmatest_thread), GFP_KERNEL); if (!thread) { pr_warning("dmatest: No memory for %s-test%u\n", - dev_name(&chan->dev), i); + dma_chan_name(chan), i); break; } thread->chan = dtc->chan; smp_wmb(); thread->task = kthread_run(dmatest_func, thread, "%s-test%u", - dev_name(&chan->dev), i); + dma_chan_name(chan), i); if (IS_ERR(thread->task)) { pr_warning("dmatest: Failed to run thread %s-test%u\n", - dev_name(&chan->dev), i); + dma_chan_name(chan), i); kfree(thread); break; } @@ -355,7 +355,7 @@ static int dmatest_add_channel(struct dma_chan *chan) list_add_tail(&thread->node, &dtc->threads); } - pr_info("dmatest: Started %u threads using %s\n", i, dev_name(&chan->dev)); + pr_info("dmatest: Started %u threads using %s\n", i, dma_chan_name(chan)); list_add_tail(&dtc->node, &dmatest_channels); nr_channels++; @@ -408,7 +408,7 @@ static void __exit dmatest_exit(void) list_del(&dtc->node); dmatest_cleanup_channel(dtc); pr_debug("dmatest: dropped channel %s\n", - dev_name(&dtc->chan->dev)); + dma_chan_name(dtc->chan)); dma_release_channel(dtc->chan); } } diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c index a29dda8..6b702cc 100644 --- a/drivers/dma/dw_dmac.c +++ b/drivers/dma/dw_dmac.c @@ -70,6 +70,15 @@ * the controller, though. 
*/ +static struct device *chan2dev(struct dma_chan *chan) +{ + return &chan->dev->device; +} +static struct device *chan2parent(struct dma_chan *chan) +{ + return chan->dev->device.parent; +} + static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc) { return list_entry(dwc->active_list.next, struct dw_desc, desc_node); @@ -93,12 +102,12 @@ static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc) ret = desc; break; } - dev_dbg(&dwc->chan.dev, "desc %p not ACKed\n", desc); + dev_dbg(chan2dev(&dwc->chan), "desc %p not ACKed\n", desc); i++; } spin_unlock_bh(&dwc->lock); - dev_vdbg(&dwc->chan.dev, "scanned %u descriptors on freelist\n", i); + dev_vdbg(chan2dev(&dwc->chan), "scanned %u descriptors on freelist\n", i); return ret; } @@ -108,10 +117,10 @@ static void dwc_sync_desc_for_cpu(struct dw_dma_chan *dwc, struct dw_desc *desc) struct dw_desc *child; list_for_each_entry(child, &desc->txd.tx_list, desc_node) - dma_sync_single_for_cpu(dwc->chan.dev.parent, + dma_sync_single_for_cpu(chan2parent(&dwc->chan), child->txd.phys, sizeof(child->lli), DMA_TO_DEVICE); - dma_sync_single_for_cpu(dwc->chan.dev.parent, + dma_sync_single_for_cpu(chan2parent(&dwc->chan), desc->txd.phys, sizeof(desc->lli), DMA_TO_DEVICE); } @@ -129,11 +138,11 @@ static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc) spin_lock_bh(&dwc->lock); list_for_each_entry(child, &desc->txd.tx_list, desc_node) - dev_vdbg(&dwc->chan.dev, + dev_vdbg(chan2dev(&dwc->chan), "moving child desc %p to freelist\n", child); list_splice_init(&desc->txd.tx_list, &dwc->free_list); - dev_vdbg(&dwc->chan.dev, "moving desc %p to freelist\n", desc); + dev_vdbg(chan2dev(&dwc->chan), "moving desc %p to freelist\n", desc); list_add(&desc->desc_node, &dwc->free_list); spin_unlock_bh(&dwc->lock); } @@ -163,9 +172,9 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first) /* ASSERT: channel is idle */ if (dma_readl(dw, CH_EN) & dwc->mask) { - dev_err(&dwc->chan.dev, + dev_err(chan2dev(&dwc->chan), "BUG: Attempted to start non-idle channel\n"); - dev_err(&dwc->chan.dev, + dev_err(chan2dev(&dwc->chan), " SAR: 0x%x DAR: 0x%x LLP: 0x%x CTL: 0x%x:%08x\n", channel_readl(dwc, SAR), channel_readl(dwc, DAR), @@ -193,7 +202,7 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc) void *param; struct dma_async_tx_descriptor *txd = &desc->txd; - dev_vdbg(&dwc->chan.dev, "descriptor %u complete\n", txd->cookie); + dev_vdbg(chan2dev(&dwc->chan), "descriptor %u complete\n", txd->cookie); dwc->completed = txd->cookie; callback = txd->callback; @@ -208,11 +217,11 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc) * mapped before they were submitted... */ if (!(txd->flags & DMA_COMPL_SKIP_DEST_UNMAP)) - dma_unmap_page(dwc->chan.dev.parent, desc->lli.dar, desc->len, - DMA_FROM_DEVICE); + dma_unmap_page(chan2parent(&dwc->chan), desc->lli.dar, + desc->len, DMA_FROM_DEVICE); if (!(txd->flags & DMA_COMPL_SKIP_SRC_UNMAP)) - dma_unmap_page(dwc->chan.dev.parent, desc->lli.sar, desc->len, - DMA_TO_DEVICE); + dma_unmap_page(chan2parent(&dwc->chan), desc->lli.sar, + desc->len, DMA_TO_DEVICE); /* * The API requires that no submissions are done from a @@ -228,7 +237,7 @@ static void dwc_complete_all(struct dw_dma *dw, struct dw_dma_chan *dwc) LIST_HEAD(list); if (dma_readl(dw, CH_EN) & dwc->mask) { - dev_err(&dwc->chan.dev, + dev_err(chan2dev(&dwc->chan), "BUG: XFER bit set, but channel not idle!\n"); /* Try to continue after resetting the channel... 
*/ @@ -273,7 +282,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc) return; } - dev_vdbg(&dwc->chan.dev, "scan_descriptors: llp=0x%x\n", llp); + dev_vdbg(chan2dev(&dwc->chan), "scan_descriptors: llp=0x%x\n", llp); list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) { if (desc->lli.llp == llp) @@ -292,7 +301,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc) dwc_descriptor_complete(dwc, desc); } - dev_err(&dwc->chan.dev, + dev_err(chan2dev(&dwc->chan), "BUG: All descriptors done, but channel not idle!\n"); /* Try to continue after resetting the channel... */ @@ -308,7 +317,7 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc) static void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli) { - dev_printk(KERN_CRIT, &dwc->chan.dev, + dev_printk(KERN_CRIT, chan2dev(&dwc->chan), " desc: s0x%x d0x%x l0x%x c0x%x:%x\n", lli->sar, lli->dar, lli->llp, lli->ctlhi, lli->ctllo); @@ -342,9 +351,9 @@ static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc) * controller flagged an error instead of scribbling over * random memory locations. */ - dev_printk(KERN_CRIT, &dwc->chan.dev, + dev_printk(KERN_CRIT, chan2dev(&dwc->chan), "Bad descriptor submitted for DMA!\n"); - dev_printk(KERN_CRIT, &dwc->chan.dev, + dev_printk(KERN_CRIT, chan2dev(&dwc->chan), " cookie: %d\n", bad_desc->txd.cookie); dwc_dump_lli(dwc, &bad_desc->lli); list_for_each_entry(child, &bad_desc->txd.tx_list, desc_node) @@ -442,12 +451,12 @@ static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx) * for DMA. But this is hard to do in a race-free manner. */ if (list_empty(&dwc->active_list)) { - dev_vdbg(&tx->chan->dev, "tx_submit: started %u\n", + dev_vdbg(chan2dev(tx->chan), "tx_submit: started %u\n", desc->txd.cookie); dwc_dostart(dwc, desc); list_add_tail(&desc->desc_node, &dwc->active_list); } else { - dev_vdbg(&tx->chan->dev, "tx_submit: queued %u\n", + dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n", desc->txd.cookie); list_add_tail(&desc->desc_node, &dwc->queue); @@ -472,11 +481,11 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, unsigned int dst_width; u32 ctllo; - dev_vdbg(&chan->dev, "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n", + dev_vdbg(chan2dev(chan), "prep_dma_memcpy d0x%x s0x%x l0x%zx f0x%lx\n", dest, src, len, flags); if (unlikely(!len)) { - dev_dbg(&chan->dev, "prep_dma_memcpy: length is zero!\n"); + dev_dbg(chan2dev(chan), "prep_dma_memcpy: length is zero!\n"); return NULL; } @@ -516,7 +525,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, first = desc; } else { prev->lli.llp = desc->txd.phys; - dma_sync_single_for_device(chan->dev.parent, + dma_sync_single_for_device(chan2parent(chan), prev->txd.phys, sizeof(prev->lli), DMA_TO_DEVICE); list_add_tail(&desc->desc_node, @@ -531,7 +540,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, prev->lli.ctllo |= DWC_CTLL_INT_EN; prev->lli.llp = 0; - dma_sync_single_for_device(chan->dev.parent, + dma_sync_single_for_device(chan2parent(chan), prev->txd.phys, sizeof(prev->lli), DMA_TO_DEVICE); @@ -562,7 +571,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, struct scatterlist *sg; size_t total_len = 0; - dev_vdbg(&chan->dev, "prep_dma_slave\n"); + dev_vdbg(chan2dev(chan), "prep_dma_slave\n"); if (unlikely(!dws || !sg_len)) return NULL; @@ -570,7 +579,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, reg_width 
= dws->reg_width; prev = first = NULL; - sg_len = dma_map_sg(chan->dev.parent, sgl, sg_len, direction); + sg_len = dma_map_sg(chan2parent(chan), sgl, sg_len, direction); switch (direction) { case DMA_TO_DEVICE: @@ -587,7 +596,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, desc = dwc_desc_get(dwc); if (!desc) { - dev_err(&chan->dev, + dev_err(chan2dev(chan), "not enough descriptors available\n"); goto err_desc_get; } @@ -607,7 +616,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, first = desc; } else { prev->lli.llp = desc->txd.phys; - dma_sync_single_for_device(chan->dev.parent, + dma_sync_single_for_device(chan2parent(chan), prev->txd.phys, sizeof(prev->lli), DMA_TO_DEVICE); @@ -633,7 +642,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, desc = dwc_desc_get(dwc); if (!desc) { - dev_err(&chan->dev, + dev_err(chan2dev(chan), "not enough descriptors available\n"); goto err_desc_get; } @@ -653,7 +662,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, first = desc; } else { prev->lli.llp = desc->txd.phys; - dma_sync_single_for_device(chan->dev.parent, + dma_sync_single_for_device(chan2parent(chan), prev->txd.phys, sizeof(prev->lli), DMA_TO_DEVICE); @@ -673,7 +682,7 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl, prev->lli.ctllo |= DWC_CTLL_INT_EN; prev->lli.llp = 0; - dma_sync_single_for_device(chan->dev.parent, + dma_sync_single_for_device(chan2parent(chan), prev->txd.phys, sizeof(prev->lli), DMA_TO_DEVICE); @@ -768,11 +777,11 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan) u32 cfghi; u32 cfglo; - dev_vdbg(&chan->dev, "alloc_chan_resources\n"); + dev_vdbg(chan2dev(chan), "alloc_chan_resources\n"); /* ASSERT: channel is idle */ if (dma_readl(dw, CH_EN) & dwc->mask) { - dev_dbg(&chan->dev, "DMA channel not idle?\n"); + dev_dbg(chan2dev(chan), "DMA channel not idle?\n"); return -EIO; } @@ -808,7 +817,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan) desc = kzalloc(sizeof(struct dw_desc), GFP_KERNEL); if (!desc) { - dev_info(&chan->dev, + dev_info(chan2dev(chan), "only allocated %d descriptors\n", i); spin_lock_bh(&dwc->lock); break; @@ -818,7 +827,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan) desc->txd.tx_submit = dwc_tx_submit; desc->txd.flags = DMA_CTRL_ACK; INIT_LIST_HEAD(&desc->txd.tx_list); - desc->txd.phys = dma_map_single(chan->dev.parent, &desc->lli, + desc->txd.phys = dma_map_single(chan2parent(chan), &desc->lli, sizeof(desc->lli), DMA_TO_DEVICE); dwc_desc_put(dwc, desc); @@ -833,7 +842,7 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan) spin_unlock_bh(&dwc->lock); - dev_dbg(&chan->dev, + dev_dbg(chan2dev(chan), "alloc_chan_resources allocated %d descriptors\n", i); return i; @@ -846,7 +855,7 @@ static void dwc_free_chan_resources(struct dma_chan *chan) struct dw_desc *desc, *_desc; LIST_HEAD(list); - dev_dbg(&chan->dev, "free_chan_resources (descs allocated=%u)\n", + dev_dbg(chan2dev(chan), "free_chan_resources (descs allocated=%u)\n", dwc->descs_allocated); /* ASSERT: channel is idle */ @@ -867,13 +876,13 @@ static void dwc_free_chan_resources(struct dma_chan *chan) spin_unlock_bh(&dwc->lock); list_for_each_entry_safe(desc, _desc, &list, desc_node) { - dev_vdbg(&chan->dev, " freeing descriptor %p\n", desc); - dma_unmap_single(chan->dev.parent, desc->txd.phys, + dev_vdbg(chan2dev(chan), " freeing descriptor %p\n", desc); + dma_unmap_single(chan2parent(chan), desc->txd.phys, sizeof(desc->lli), DMA_TO_DEVICE); kfree(desc); 
} - dev_vdbg(&chan->dev, "free_chan_resources done\n"); + dev_vdbg(chan2dev(chan), "free_chan_resources done\n"); } /*----------------------------------------------------------------------*/ diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c index 46e0128..ca70a21 100644 --- a/drivers/dma/fsldma.c +++ b/drivers/dma/fsldma.c @@ -822,7 +822,7 @@ static int __devinit fsl_dma_chan_probe(struct fsl_dma_device *fdev, */ WARN_ON(fdev->feature != new_fsl_chan->feature); - new_fsl_chan->dev = &new_fsl_chan->common.dev; + new_fsl_chan->dev = &new_fsl_chan->common.dev->device; new_fsl_chan->reg_base = ioremap(new_fsl_chan->reg.start, new_fsl_chan->reg.end - new_fsl_chan->reg.start + 1); diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index 1419a50..d6b6bff 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -113,7 +113,7 @@ struct dma_chan_percpu { * @device: ptr to the dma device who supplies this channel, always !%NULL * @cookie: last cookie value returned to client * @chan_id: channel ID for sysfs - * @class_dev: class device for sysfs + * @dev: class device for sysfs * @refcount: kref, used in "bigref" slow-mode * @slow_ref: indicates that the DMA channel is free * @rcu: the DMA channel's RCU head @@ -128,7 +128,7 @@ struct dma_chan { /* sysfs */ int chan_id; - struct device dev; + struct dma_chan_dev *dev; struct list_head device_node; struct dma_chan_percpu *local; @@ -136,7 +136,20 @@ struct dma_chan { int table_count; }; -#define to_dma_chan(p) container_of(p, struct dma_chan, dev) +/** + * struct dma_chan_dev - relate sysfs device node to backing channel device + * @chan - driver channel device + * @device - sysfs device + */ +struct dma_chan_dev { + struct dma_chan *chan; + struct device device; +}; + +static inline const char *dma_chan_name(struct dma_chan *chan) +{ + return dev_name(&chan->dev->device); +} void dma_chan_cleanup(struct kref *kref); -- cgit v0.10.2 From 864498aaa9fef69ee166da023d12413a7776342d Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:21 -0700 Subject: dmaengine: use idr for registering dma device numbers This brings some predictability to dma device numbers, i.e. an rmmod/insmod cycle may now result in /sys/class/dma/dma0chan0 being restored rather than /sys/class/dma/dma1chan0 appearing. 
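In isolation, the allocation pattern adopted below looks like this (idr API of this kernel generation; the my_* wrappers are illustrative only, and the refcounted release of the id by the channel devices is omitted). Because the idr always hands out the lowest unused number and the number returns to the pool on release, an rmmod/insmod cycle tends to land on the same dma%d again:

#include <linux/idr.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(my_idr_lock);
static struct idr my_idr;	/* idr_init(&my_idr) at init time */

static int my_alloc_dev_id(void)
{
	int id, rc;

 retry:
	if (!idr_pre_get(&my_idr, GFP_KERNEL))
		return -ENOMEM;
	mutex_lock(&my_idr_lock);
	/* lowest free id is returned, so released numbers get reused */
	rc = idr_get_new(&my_idr, NULL, &id);
	mutex_unlock(&my_idr_lock);
	if (rc == -EAGAIN)
		goto retry;
	return rc ? rc : id;
}

static void my_free_dev_id(int id)
{
	mutex_lock(&my_idr_lock);
	idr_remove(&my_idr, id);
	mutex_unlock(&my_idr_lock);
}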
Cc: Maciej Sosnowski Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 93c4c9a..dd43410 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -57,10 +57,12 @@ #include #include #include +#include static DEFINE_MUTEX(dma_list_mutex); static LIST_HEAD(dma_device_list); static long dmaengine_ref_count; +static struct idr dma_idr; /* --- sysfs implementation --- */ @@ -147,6 +149,12 @@ static void chan_dev_release(struct device *dev) struct dma_chan_dev *chan_dev; chan_dev = container_of(dev, typeof(*chan_dev), device); + if (atomic_dec_and_test(chan_dev->idr_ref)) { + mutex_lock(&dma_list_mutex); + idr_remove(&dma_idr, chan_dev->dev_id); + mutex_unlock(&dma_list_mutex); + kfree(chan_dev->idr_ref); + } kfree(chan_dev); } @@ -611,9 +619,9 @@ EXPORT_SYMBOL(dmaengine_put); */ int dma_async_device_register(struct dma_device *device) { - static int id; int chancnt = 0, rc; struct dma_chan* chan; + atomic_t *idr_ref; if (!device) return -ENODEV; @@ -640,9 +648,20 @@ int dma_async_device_register(struct dma_device *device) BUG_ON(!device->device_issue_pending); BUG_ON(!device->dev); + idr_ref = kmalloc(sizeof(*idr_ref), GFP_KERNEL); + if (!idr_ref) + return -ENOMEM; + atomic_set(idr_ref, 0); + idr_retry: + if (!idr_pre_get(&dma_idr, GFP_KERNEL)) + return -ENOMEM; mutex_lock(&dma_list_mutex); - device->dev_id = id++; + rc = idr_get_new(&dma_idr, NULL, &device->dev_id); mutex_unlock(&dma_list_mutex); + if (rc == -EAGAIN) + goto idr_retry; + else if (rc != 0) + return rc; /* represent channels in sysfs. Probably want devs too */ list_for_each_entry(chan, &device->channels, device_node) { @@ -659,6 +678,9 @@ int dma_async_device_register(struct dma_device *device) chan->dev->device.class = &dma_devclass; chan->dev->device.parent = device->dev; chan->dev->chan = chan; + chan->dev->idr_ref = idr_ref; + chan->dev->dev_id = device->dev_id; + atomic_inc(idr_ref); dev_set_name(&chan->dev->device, "dma%dchan%d", device->dev_id, chan->chan_id); @@ -971,6 +993,7 @@ EXPORT_SYMBOL_GPL(dma_run_dependencies); static int __init dma_bus_init(void) { + idr_init(&dma_idr); mutex_init(&dma_list_mutex); return class_register(&dma_devclass); } diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index d6b6bff..64dea2a 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -140,10 +140,14 @@ struct dma_chan { * struct dma_chan_dev - relate sysfs device node to backing channel device * @chan - driver channel device * @device - sysfs device + * @dev_id - parent dma_device dev_id + * @idr_ref - reference count to gate release of dma_device dev_id */ struct dma_chan_dev { struct dma_chan *chan; struct device device; + int dev_id; + atomic_t *idr_ref; }; static inline const char *dma_chan_name(struct dma_chan *chan) -- cgit v0.10.2 From e2346677af86150c6083974585c131e8a2c3ddcc Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:21 -0700 Subject: dmaengine: advertise all channels on a device to dma_filter_fn Allow dma_filter_fn routines to disambiguate multiple channels on a device rather than assuming that all channels on a device are equal. 
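Since the filter callback now sees every free channel individually, it can discriminate on per-channel state such as chan_id, not just on the parent device. A hedged sketch of a consumer-side filter (struct my_dma_slave and the request-line-equals-chan_id convention are invented for illustration; only the dmaengine calls are real):

#include <linux/dmaengine.h>

struct my_dma_slave {
	struct device *dma_dev;		/* controller the data must use */
	unsigned int req_line;		/* assume: request line == chan_id */
};

static bool my_filter(struct dma_chan *chan, void *param)
{
	struct my_dma_slave *slave = param;

	if (chan->device->dev != slave->dma_dev)
		return false;			/* wrong controller */
	/* channels of the same dma_device are no longer interchangeable */
	return chan->chan_id == slave->req_line;
}

static struct dma_chan *my_grab_channel(struct my_dma_slave *slave)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	/* returns NULL if no free channel passed my_filter */
	return dma_request_channel(mask, my_filter, slave);
}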
Cc: Maciej Sosnowski Reported-by: Guennadi Liakhovetski Signed-off-by: Dan Williams diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index dd43410..9d3594c 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -454,10 +454,10 @@ static void dma_channel_rebalance(void) } } -static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev) +static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_device *dev, + dma_filter_fn fn, void *fn_param) { struct dma_chan *chan; - struct dma_chan *ret = NULL; if (!__dma_device_satisfies_mask(dev, mask)) { pr_debug("%s: wrong capabilities\n", __func__); @@ -479,11 +479,15 @@ static struct dma_chan *private_candidate(dma_cap_mask_t *mask, struct dma_devic __func__, dma_chan_name(chan)); continue; } - ret = chan; - break; + if (fn && !fn(chan, fn_param)) { + pr_debug("%s: %s filter said false\n", + __func__, dma_chan_name(chan)); + continue; + } + return chan; } - return ret; + return NULL; } /** @@ -496,22 +500,13 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v { struct dma_device *device, *_d; struct dma_chan *chan = NULL; - bool ack; int err; /* Find a channel */ mutex_lock(&dma_list_mutex); list_for_each_entry_safe(device, _d, &dma_device_list, global_node) { - chan = private_candidate(mask, device); - if (!chan) - continue; - - if (fn) - ack = fn(chan, fn_param); - else - ack = true; - - if (ack) { + chan = private_candidate(mask, device, fn, fn_param); + if (chan) { /* Found a suitable channel, try to grab, prep, and * return it. We first set DMA_PRIVATE to disable * balance_ref_count as this channel will not be @@ -529,10 +524,8 @@ struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, v dma_chan_name(chan), err); else break; - } else - pr_debug("%s: %s filter said false\n", - __func__, dma_chan_name(chan)); - chan = NULL; + chan = NULL; + } } mutex_unlock(&dma_list_mutex); -- cgit v0.10.2 From 652afc27b26859a0ea5f6db681d80b83d2c43cf8 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:22 -0700 Subject: dmaengine: bump initcall level to arch_initcall There are dmaengine users that would like to register dma devices at subsys_initcall time to ensure channels are available by device_initcall time. 
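For orientation, built-in initcalls run level by level: arch_initcall precedes subsys_initcall, which precedes device_initcall (what module_init maps to for built-in code). With the dmaengine core and its channel table now at arch_initcall, a platform DMA driver registered at subsys_initcall has its channels visible by the time ordinary device_initcall consumers run. A sketch under those assumptions (my_dma_driver and the consumer are hypothetical):

/* controller side: register one level earlier than consumers;
 * my_dma_driver's probe() is assumed to call dma_async_device_register() */
static int __init my_dma_driver_init(void)
{
	return platform_driver_register(&my_dma_driver);
}
subsys_initcall(my_dma_driver_init);

/* consumer side: by device_initcall time the channel exists */
static struct dma_chan *my_chan;

static int __init my_consumer_init(void)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	my_chan = dma_request_channel(mask, NULL, NULL);
	return my_chan ? 0 : -ENODEV;
}
device_initcall(my_consumer_init);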
Cc: Maciej Sosnowski Cc: Guennadi Liakhovetski Cc: Nicolas Ferre Signed-off-by: Dan Williams diff --git a/drivers/dca/dca-core.c b/drivers/dca/dca-core.c index d883e1b..5543384 100644 --- a/drivers/dca/dca-core.c +++ b/drivers/dca/dca-core.c @@ -270,6 +270,6 @@ static void __exit dca_exit(void) dca_sysfs_exit(); } -subsys_initcall(dca_init); +arch_initcall(dca_init); module_exit(dca_exit); diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 9d3594c..403dbe7 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -318,7 +318,7 @@ static int __init dma_channel_table_init(void) return err; } -subsys_initcall(dma_channel_table_init); +arch_initcall(dma_channel_table_init); /** * dma_find_channel - find a channel to carry out the operation @@ -990,6 +990,6 @@ static int __init dma_bus_init(void) mutex_init(&dma_list_mutex); return class_register(&dma_devclass); } -subsys_initcall(dma_bus_init); +arch_initcall(dma_bus_init); -- cgit v0.10.2 From b9bdcbba010c2e49c8f837ea7a49fe006b636f41 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 6 Jan 2009 11:38:22 -0700 Subject: ioat: fix self test for multi-channel case In the multiple device case we need to re-arm the completion and protect against concurrent self-tests. The printk from the test callback is removed as it can arbitrarily delay completion of the test. Cc: Cc: Maciej Sosnowski Signed-off-by: Dan Williams diff --git a/drivers/dma/ioat_dma.c b/drivers/dma/ioat_dma.c index e42e1ae..b3759c4 100644 --- a/drivers/dma/ioat_dma.c +++ b/drivers/dma/ioat_dma.c @@ -1340,12 +1340,11 @@ static void ioat_dma_start_null_desc(struct ioat_dma_chan *ioat_chan) */ #define IOAT_TEST_SIZE 2000 -DECLARE_COMPLETION(test_completion); static void ioat_dma_test_callback(void *dma_async_param) { - printk(KERN_ERR "ioatdma: ioat_dma_test_callback(%p)\n", - dma_async_param); - complete(&test_completion); + struct completion *cmp = dma_async_param; + + complete(cmp); } /** @@ -1362,6 +1361,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device) dma_addr_t dma_dest, dma_src; dma_cookie_t cookie; int err = 0; + struct completion cmp; src = kzalloc(sizeof(u8) * IOAT_TEST_SIZE, GFP_KERNEL); if (!src) @@ -1401,8 +1401,9 @@ static int ioat_dma_self_test(struct ioatdma_device *device) } async_tx_ack(tx); + init_completion(&cmp); tx->callback = ioat_dma_test_callback; - tx->callback_param = (void *)0x8086; + tx->callback_param = &cmp; cookie = tx->tx_submit(tx); if (cookie < 0) { dev_err(&device->pdev->dev, @@ -1412,7 +1413,7 @@ static int ioat_dma_self_test(struct ioatdma_device *device) } device->common.device_issue_pending(dma_chan); - wait_for_completion_timeout(&test_completion, msecs_to_jiffies(3000)); + wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)); if (device->common.device_is_tx_complete(dma_chan, cookie, NULL, NULL) != DMA_SUCCESS) { -- cgit v0.10.2
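The shape of the self-test fix above is a generally useful idiom: give each invocation its own on-stack completion and pass it through callback_param, so concurrent self-tests on multiple devices never share, or forget to re-arm, a global DECLARE_COMPLETION(). A condensed sketch, assuming a descriptor has already been prepared elsewhere:

#include <linux/completion.h>
#include <linux/dmaengine.h>
#include <linux/jiffies.h>

static void my_dma_done(void *arg)
{
	complete(arg);		/* arg is the caller's on-stack completion */
}

static int my_wait_for_copy(struct dma_chan *chan,
			    struct dma_async_tx_descriptor *tx)
{
	struct completion cmp;
	dma_cookie_t cookie;

	/* fresh completion per call: nothing shared across devices */
	init_completion(&cmp);
	tx->callback = my_dma_done;
	tx->callback_param = &cmp;
	cookie = tx->tx_submit(tx);
	if (cookie < 0)
		return -ENODEV;

	chan->device->device_issue_pending(chan);
	if (!wait_for_completion_timeout(&cmp, msecs_to_jiffies(3000)))
		return -ETIMEDOUT;
	return 0;
}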